
The dendrogram - Hierarchical Clustering & Closing Remarks


Case Studies: Finding Similar Documents

A reader is interested in a specific news article and you want to find similar articles to recommend. What is the right notion of similarity? Moreover, what if there are millions of other documents? Each time you want to retrieve a new document, do you need to search through all other documents? How do you group similar documents together? How do you discover new, emerging topics that the documents cover?

In this third case study, finding similar documents, you will examine similarity-based algorithms for retrieval. You will also examine structured representations for describing the documents in the corpus, including clustering and mixed membership models such as latent Dirichlet allocation (LDA). You will implement expectation maximization (EM) to learn the document clusterings, and see how to scale the methods using MapReduce.

Learning Outcomes: By the end of this course, you will be able to:
- Create a document retrieval system using k-nearest neighbors.
- Identify various similarity metrics for text data.
- Reduce computations in k-nearest neighbor search by using KD-trees.
- Produce approximate nearest neighbors using locality-sensitive hashing.
- Compare and contrast supervised and unsupervised learning tasks.
- Cluster documents by topic using k-means.
- Describe how to parallelize k-means using MapReduce.
- Examine probabilistic clustering approaches using mixture models.
- Fit a mixture of Gaussians model using expectation maximization (EM).
- Perform mixed membership modeling using latent Dirichlet allocation (LDA).
- Describe the steps of a Gibbs sampler and how to use its output to draw inferences.
- Compare and contrast initialization techniques for non-convex optimization objectives.
- Implement these techniques in Python.
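To make the first of these outcomes concrete, the sketch below builds a tiny nearest-neighbor document retrieval system from TF-IDF features and cosine distance. It uses scikit-learn rather than the course's assignment code, and the toy corpus, query, and parameter choices are illustrative assumptions, not course material.

    # A minimal sketch of similarity-based document retrieval (assumes scikit-learn).
    # The corpus and query below are made up for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neighbors import NearestNeighbors

    corpus = [
        "the quarterback threw a touchdown pass",
        "the senate passed a new budget bill",
        "the striker scored the winning goal",
        "lawmakers debated the tax proposal",
    ]

    # TF-IDF turns each document into a sparse, weighted bag-of-words vector.
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(corpus)

    # Cosine distance is one common notion of similarity for text;
    # here we retrieve the 2 nearest documents by brute-force search.
    nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)

    query = vectorizer.transform(["a goal was scored in the match"])
    distances, indices = nn.kneighbors(query)
    for dist, idx in zip(distances[0], indices[0]):
        print(f"{corpus[idx]!r}  (cosine distance {dist:.3f})")

Later modules of the course replace this brute-force search with KD-trees and locality-sensitive hashing to cut the cost of finding neighbors in large corpora.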

Skills You'll Learn

Algorithms, Data Mining, Machine Learning, Scalability, Unsupervised Learning, Sampling (Statistics), Probability Distribution, Statistical Modeling, Bayesian Statistics, Unstructured Data, Text Mining, Statistical Machine Learning, Statistical Inference, Applied Machine Learning, Big Data, Machine Learning Algorithms

Reviews


4.7 (2,362 ratings)

TT · Oct 30, 2016 · 5/5 stars

I really learned a lot in this course. Although the materials are very difficult at first read, Emily's explanations were clear and I was able to get the idea after a few reviews.

UZ · Nov 28, 2016 · 5/5 stars

This was another great course. I hope that the instructors indulge in a little more theory. Anyway, it was a magnificent course. I hope the coming courses are as good as this one.

From the lesson

Hierarchical Clustering & Closing Remarks

In the conclusion of the course, we will recap what we have covered. This includes both techniques specific to clustering and retrieval and foundational machine learning concepts that are more broadly useful.

We provide a quick tour of an alternative clustering approach called hierarchical clustering, which you will experiment with on the Wikipedia dataset. Following this exploration, we discuss how clustering-type ideas can be applied in other areas, like segmenting time series. We then briefly outline some important clustering and retrieval ideas that we did not cover in this course.

We conclude with an overview of what's in store for you in the rest of the specialization.
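Since this lesson centers on the dendrogram, here is a minimal sketch of agglomerative clustering and a dendrogram plot using SciPy and Matplotlib. The toy 2-D points standing in for document vectors, the Ward linkage choice, and the labels are illustrative assumptions; the course's own exercise works with the Wikipedia dataset instead.

    # A minimal sketch of hierarchical (agglomerative) clustering and a dendrogram
    # (assumes SciPy and Matplotlib; the data below are made up for illustration).
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, dendrogram

    # Hypothetical 2-D feature vectors standing in for document representations.
    X = np.array([[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9], [0.5, 0.5]])

    # Ward linkage repeatedly merges the pair of clusters that least increases
    # within-cluster variance; the sequence of merges forms a binary tree.
    Z = linkage(X, method="ward")

    # The dendrogram draws that tree: leaves are data points, and the height of
    # each join is the distance at which the two clusters were merged.
    dendrogram(Z, labels=["doc1", "doc2", "doc3", "doc4", "doc5"])
    plt.xlabel("documents")
    plt.ylabel("merge distance")
    plt.show()

Cutting the tree at a chosen height yields a flat clustering, which is how a dendrogram connects back to the k-means-style partitions covered earlier in the course.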

