Overview
Learn the fundamentals of Hidden Markov Models (HMMs) in this comprehensive lecture from Johns Hopkins University's Summer School on Human Language Technology. Explore the mathematical foundations, algorithms, and applications of HMMs in natural language processing and speech recognition. Discover how these probabilistic models handle sequential data with hidden states, understand the forward-backward algorithm for computing probabilities, and examine the Viterbi algorithm for finding the most likely sequence of hidden states. Delve into parameter estimation techniques including the Baum-Welch algorithm and explore practical applications in part-of-speech tagging, speech recognition, and other language processing tasks. Gain insights into the theoretical underpinnings that make HMMs a cornerstone technique in computational linguistics and machine learning for sequential data analysis.
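As a taste of the material, the Viterbi algorithm mentioned above can be sketched in a few lines of Python. This is an illustrative toy example (the "weather vs. activities" setup is an assumption for demonstration, not taken from the lecture): given start, transition, and emission probabilities, it recovers the most likely hidden-state sequence by dynamic programming with back-pointers.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (best_prob, best_path) for an observation sequence."""
    # V[t][s]: probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]  # back-pointers for reconstructing the best path
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # pick the predecessor state that maximizes the path probability
            prev, p = max(
                ((r, V[t - 1][r] * trans_p[r][s]) for r in states),
                key=lambda x: x[1],
            )
            V[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev
    # backtrack from the most probable final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    path.reverse()
    return V[-1][last], path

# Toy model: hidden weather states, observed daily activities.
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
prob, path = viterbi(("walk", "shop", "clean"), states, start_p, trans_p, emit_p)
# → path ["Sunny", "Rainy", "Rainy"] with probability 0.01344
```

The forward-backward algorithm discussed in the lecture has the same dynamic-programming shape, but sums over predecessor states instead of maximizing, yielding marginal probabilities rather than a single best path.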
Syllabus
Jason Eisner: Hidden Markov Models
Taught by
Center for Language & Speech Processing (CLSP), JHU