This course introduces Large Language Models (LLMs) and the Hugging Face ecosystem, combining conceptual understanding with hands-on implementation to help you build intelligent, language-driven systems. Whether you’re exploring AI for the first time or looking to deepen your understanding of modern NLP architectures, this course provides a clear and practical path into the world of transformer-based models and open-source innovation.
Through guided lessons and real-world demonstrations, you’ll explore how LLMs process language, learn from massive datasets, and generate context-aware responses. You’ll also gain hands-on experience using Hugging Face tools to load, evaluate, and fine-tune models, prepare datasets for NLP tasks, and build pipelines for classification, sentiment analysis, and question answering. The course culminates with a project that integrates fine-tuned models, external APIs, and a user interface to create a fully functional knowledge assistant.
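To give a flavor of how these models process language, here is a minimal sketch of scaled dot-product attention, the core computation inside transformer models, in plain Python. The vectors are hand-picked toy values for illustration, not real embeddings, and this omits the multi-head, batched, and masked machinery of production models:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def scaled_dot_product_attention(queries, keys, values):
    """For each query, mix the value vectors weighted by
    softmax(q . k / sqrt(d)) over all keys."""
    d = len(queries[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted average of the value vectors, one coordinate at a time.
        mixed = [sum(w * v[i] for w, v in zip(weights, values))
                 for i in range(len(values[0]))]
        outputs.append(mixed)
    return outputs

# Toy example: two 2-dimensional tokens attending over two key/value pairs.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = scaled_dot_product_attention(Q, K, V)
```

Each output row is a convex combination of the value rows: the first query aligns with the first key, so its output lands closer to the first value vector, and symmetrically for the second.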
By the end of this course, you will be able to:
• Understand transformer architecture and attention mechanisms that power modern LLMs.
• Differentiate between pre-training and fine-tuning approaches and apply them using Hugging Face tools.
• Compare open-source and proprietary LLMs, evaluating trade-offs in performance and accessibility.
• Prepare and tokenize datasets for efficient model training and evaluation.
• Build, test, and deploy NLP pipelines for real-world applications.
• Extend agents with external data sources and integrate APIs securely.
• Develop and test an end-to-end intelligent assistant powered by fine-tuned models.
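One outcome from the list above, preparing and tokenizing datasets, can be illustrated with a greedy longest-match subword tokenizer in the WordPiece style used by many Hugging Face models. The vocabulary here is a hypothetical toy set of a few pieces; real tokenizers load vocabularies of tens of thousands of entries:

```python
def tokenize(text, vocab):
    """Greedy longest-match subword tokenization (WordPiece-style sketch).
    Continuation pieces are prefixed with '##'; words with no match
    become [UNK]."""
    tokens = []
    for word in text.lower().split():
        start = 0
        while start < len(word):
            end = len(word)
            piece = None
            # Shrink the window until the longest in-vocab piece is found.
            while start < end:
                candidate = word[start:end] if start == 0 else "##" + word[start:end]
                if candidate in vocab:
                    piece = candidate
                    break
                end -= 1
            if piece is None:
                tokens.append("[UNK]")
                break
            tokens.append(piece)
            start = end
    return tokens

# Hypothetical toy vocabulary for illustration only.
vocab = {"fine", "##tune", "##tuning", "models", "[UNK]"}
ids = {tok: i for i, tok in enumerate(sorted(vocab))}

tokens = tokenize("finetuning models quickly", vocab)
token_ids = [ids[t] for t in tokens]
```

The word "finetuning" splits into the pieces "fine" and "##tuning", while "quickly", absent from the toy vocabulary, falls back to [UNK]; the resulting integer IDs are what a model actually consumes during training and evaluation.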
This course is ideal for AI developers, data scientists, and ML enthusiasts who want to understand and apply LLMs using open-source frameworks. A basic understanding of Python and machine learning is helpful but not required.
Join us to explore the fundamentals of large language models, master the Hugging Face ecosystem, and gain the practical skills to fine-tune, connect, and deploy intelligent systems that power the future of AI.