Overview
This specialization features Coursera Coach! Learn interactively through real-time conversations to test your knowledge, challenge assumptions, and deepen your understanding.
You will gain the skills to master AI and Large Language Models (LLMs), focusing on Generative AI (GenAI) and Retrieval-Augmented Generation (RAG). Through hands-on projects, you’ll build AI applications that process large datasets and generate meaningful outputs. By the end, you'll develop AI systems capable of understanding context, generating content, and integrating with various data sources.
The course begins with setting up your development environment and reviewing Python fundamentals. You’ll then learn about LLMs, GenAI, and RAG architecture, before moving on to building and fine-tuning AI models. Topics include prompt engineering, API integration, and creating chatbots, summarizers, and personalized AI models.
Designed for developers, data scientists, and AI enthusiasts, this course is for those with basic programming knowledge who want to explore advanced AI techniques.
By the end, you'll be able to set up your environment, build LLM-based applications, and integrate RAG for AI-driven solutions.
Syllabus
- Course 1: Foundations of AI, LLMs, and Development Environments
- Course 2: Advanced Prompt Engineering and Memory Management
- Course 3: Building and Fine-Tuning LLM Applications
Courses
- Course 2: Advanced Prompt Engineering and Memory Management

  This advanced course offers a deep dive into techniques that enhance the performance and interactivity of Large Language Models (LLMs). Starting with the basics of prompt engineering, you will explore a range of advanced strategies, from few-shot and zero-shot to chain-of-thought prompting. As you progress, you'll move into context and memory management, learning how LLMs retain and use conversational state for more sophisticated interactions. Hands-on projects help you apply each technique, so you gain practical experience with real-world scenarios alongside the theory.

  The course also covers retrieval-augmented generation (RAG), a method that integrates external data retrieval with generative AI to enhance model responses. Throughout the modules, you'll build and optimize complex workflows, from setting up memory management for chatbots to constructing a complete RAG pipeline and integrating it into a user interface, making the final product both functional and user-friendly.

  This course is aimed at intermediate to advanced learners with a background in AI or programming who want to refine their skills in AI model optimization, particularly prompt design, memory management, and RAG application development. By the end, you will be able to implement advanced prompting techniques, manage context and memory in LLMs, develop a functional RAG pipeline, and integrate these systems into interactive applications.
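As a rough illustration of two techniques this course names, here is a minimal, hypothetical sketch of few-shot and chain-of-thought prompt construction plus a sliding-window conversation memory. All helper names and example texts are invented for illustration, not taken from the course materials:

```python
from collections import deque

# Hypothetical sketch: few-shot / chain-of-thought prompt construction
# and a sliding-window conversation memory. Names and texts are invented.

FEW_SHOT_EXAMPLES = [
    ("Classify the sentiment: 'The battery died in an hour.'", "negative"),
    ("Classify the sentiment: 'Setup took thirty seconds.'", "positive"),
]

def build_prompt(question: str, chain_of_thought: bool = False) -> str:
    """Assemble a few-shot prompt; optionally add a chain-of-thought cue."""
    parts = [f"Q: {q}\nA: {a}" for q, a in FEW_SHOT_EXAMPLES]
    parts.append(f"Q: {question}")
    # The "step by step" cue nudges the model to reason before answering.
    parts.append("A: Let's think step by step." if chain_of_thought else "A:")
    return "\n\n".join(parts)

class SlidingWindowMemory:
    """Keep only the most recent turns so prompts stay within context limits."""
    def __init__(self, max_turns: int = 4):
        self.turns = deque(maxlen=max_turns)

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def as_context(self) -> str:
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

prompt = build_prompt("Classify the sentiment: 'The screen is gorgeous.'",
                      chain_of_thought=True)
mem = SlidingWindowMemory(max_turns=2)
mem.add("user", "Hi")
mem.add("assistant", "Hello!")
mem.add("user", "What is RAG?")
print(prompt)
print(mem.as_context())  # only the 2 most recent turns survive
```

In a real chatbot, `mem.as_context()` would be prepended to each new prompt; more elaborate schemes (summarized memory, vector-store retrieval) build on the same idea of selecting what past state re-enters the context window.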
- Course 3: Building and Fine-Tuning LLM Applications

  In this comprehensive course, you'll learn how to build and fine-tune large language models (LLMs) for real-world applications. Starting with fundamental concepts, you'll progress through hands-on projects covering document-based retrieval-augmented generation (RAG) systems, LangChain integration, and fine-tuning techniques. You'll build custom applications such as a PDF RAG system, a voice assistant, and a YouTube video summarizer, with a focus on optimizing the retrieval and generation of content.

  You'll also dive into advanced fine-tuning methods such as LoRA (Low-Rank Adaptation), learning to fine-tune models efficiently with minimal computational resources. Throughout the course, you'll implement real-world projects that integrate sophisticated LLM functionality into usable applications. By the end, you'll be able to deploy and fine-tune LLMs for personalized tasks, giving you the tools to tackle complex AI challenges in your own projects.

  This course is designed for intermediate to advanced learners with prior programming experience who want to deepen their understanding of LLMs and apply them to industry-specific problems. No prior fine-tuning experience is required, though knowledge of Python and machine learning basics will help. By the end, you will be able to build, fine-tune, and deploy LLM applications, including RAG systems, voice assistants, and specialized chatbots, using advanced techniques such as LoRA fine-tuning.
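The core idea behind LoRA can be shown in a few lines of NumPy: the pretrained weight matrix W stays frozen, and only a low-rank update B·A is trained, scaled by alpha/r. This toy sketch (dimensions invented; real fine-tuning applies this inside attention layers via a library such as Hugging Face PEFT) illustrates why the trainable parameter count drops so sharply:

```python
import numpy as np

# Toy LoRA sketch: the adapted layer computes (W + (alpha/r) * B @ A) @ x.
# W is frozen; only A and B (rank r) are trained. Dimensions are invented.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

def lora_forward(x: np.ndarray) -> np.ndarray:
    """Base path plus the scaled low-rank adapter path."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B zero-initialised, the adapted layer starts identical to the base layer,
# so training begins from the pretrained model's behaviour.
assert np.allclose(lora_forward(x), W @ x)

full_params = d_out * d_in        # parameters a full fine-tune would update
lora_params = r * (d_in + d_out)  # parameters LoRA updates instead
print(f"LoRA trains {lora_params} params vs {full_params} for full fine-tuning")
```

Here rank 4 cuts trainable parameters from 4096 to 512 for a single 64x64 layer; on billion-parameter models the same ratio is what makes fine-tuning feasible on modest hardware.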
- Course 1: Foundations of AI, LLMs, and Development Environments

  In this course, you will gain a comprehensive understanding of AI, Large Language Models (LLMs), and their development environments. You'll start with the foundational principles of AI and LLMs, followed by hands-on demonstrations of the projects you'll build. The course is structured progressively, from environment setup through Python fundamentals to working with LLMs using libraries such as Hugging Face's Transformers, with step-by-step tutorials and practice exercises that reinforce key concepts.

  The journey includes setting up your development environment, mastering Python fundamentals, exploring machine learning and deep learning, and diving into the complexities of Generative AI. Key concepts such as the transformer architecture, the self-attention mechanism, and calling OpenAI APIs are explored in detail, so each module builds your coding and problem-solving skills toward more advanced techniques in AI development.

  This course is ideal for beginners with a basic understanding of programming who want to break into AI and machine learning development. No prior experience in AI is required, though familiarity with Python will be beneficial. By the end, you will be able to set up your development environment for AI projects, understand and implement LLMs built on the transformer architecture, create and deploy AI models, and integrate OpenAI's models through API calls.
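The self-attention mechanism mentioned above reduces to one formula, softmax(QK^T / sqrt(d_k)) V, which can be sketched in NumPy. This is a minimal single-head version with invented dimensions, not the course's own code:

```python
import numpy as np

# Minimal single-head scaled dot-product self-attention:
#   Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
# Dimensions and weights here are invented for illustration.

def softmax(z: np.ndarray, axis: int = -1) -> np.ndarray:
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Each token attends to every token in X, weighted by query-key similarity."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # (seq_len, seq_len) similarity matrix
    weights = softmax(scores, axis=-1)    # each row is a distribution over tokens
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 8, 8
X = rng.normal(size=(seq_len, d_model))             # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))

out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8): one contextualised vector per token
```

A full transformer stacks this with multiple heads, residual connections, and feed-forward layers, but every variant rests on this weighted-average-over-tokens core.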
Taught by
Packt - Course Instructors