
Prompt Engineering Generative AI & LLM Models Fundamentals

Whizlabs via Coursera

Overview

The Prompt Engineering, Generative AI & LLM Models Fundamentals course is designed for learners who want to build a strong foundation in Large Language Models (LLMs), Generative AI concepts, and prompt engineering techniques. The course focuses on helping technical professionals and AI enthusiasts understand how modern generative AI systems work and how to interact with and optimize these models effectively for real-world applications.

This course bridges the gap between theoretical knowledge of generative AI and the practical techniques used to guide, evaluate, and improve LLM performance. Learners will explore how LLMs are trained on large datasets, how they generate responses, and how prompt engineering and fine-tuning techniques can be applied to improve the quality and reliability of AI outputs.

The course provides approximately 4–5 hours of video lectures, offering a comprehensive understanding of both core LLM concepts and practical prompt engineering strategies. It is structured into 3 comprehensive modules, with each module further divided into focused technical lessons. To test learners’ understanding, every module includes quizzes and in-video knowledge checks.

Enroll in our “Prompt Engineering, Generative AI & LLM Models Fundamentals” course to develop the skills needed to design effective prompts, understand LLM training processes, and apply advanced techniques used in modern generative AI systems.

Modules Included in the Course

  • Module 1: Foundations of Large Language Models and Generative AI
  • Module 2: LLM Training, Optimization, and Evaluation
  • Module 3: Prompt Engineering, Fine-Tuning, and Advanced LLM Architectures

This course is specifically designed for technical professionals, developers, AI practitioners, and learners interested in understanding the core mechanisms behind generative AI systems and LLM-based applications.

By the end of this course, a learner will be able to:

  • Understand the fundamental concepts of Large Language Models and Generative AI systems.
  • Explain how LLMs are trained, optimized, and evaluated using different learning techniques and metrics.
  • Apply prompt engineering and prompt design techniques to guide model outputs effectively.
  • Understand advanced techniques such as fine-tuning, prompt tuning, and Retrieval-Augmented Generation (RAG) used to improve LLM performance.

Syllabus

  • Foundations of Large Language Models and Generative AI
    • Welcome to the module Foundations of Large Language Models and Generative AI. In this module, you will explore the core concepts behind Large Language Models (LLMs) and understand how Generative AI systems are designed and applied. We begin by introducing LLMs and their role within artificial intelligence and machine learning. You will learn what defines a Generative AI model and examine the key components that power these systems. Through a hands-on demo using HuggingFace, you will see how LLMs are applied to common NLP tasks such as text generation and classification (a minimal pipeline sketch appears below the syllabus). The module also highlights the importance of training data, including how LLMs are trained on large datasets and why data cleaning is critical for improving model performance and reliability. By the end of this module, you will have a clear understanding of how LLMs and Generative AI systems work, how they are trained, and the role of high-quality data in building effective AI solutions.
  • LLM Training, Optimization, and Evaluation
    • Welcome to the module LLM Training, Optimization, and Evaluation. In this module, you will dive deeper into how Large Language Models are trained, optimized, and assessed for performance and reliability. You will begin by understanding the fundamentals of LLM training and optimization, including how massive datasets and computational resources are used to build high-performing models. The module explores different learning techniques such as zero-shot, few-shot, instruction tuning, and Reinforcement Learning from Human Feedback (RLHF), helping you understand how models adapt to tasks with minimal examples. You will also learn about loss functions and how they guide model learning during training. The concept of LLM alignment is introduced to explain how models are tuned to produce safe, accurate, and human-aligned responses. On the evaluation side, you will examine key evaluation metrics, including perplexity, and understand how model quality is measured (a short perplexity calculation is sketched below the syllabus). The module highlights the critical role humans play in evaluating outputs and refining models, as well as the importance of GPUs in enabling large-scale model training. By the end of this module, you will have a strong understanding of how LLMs are trained, optimized, aligned, and evaluated in real-world AI systems.
  • Prompt Engineering, Fine-Tuning, and Advanced LLM Architectures
    • Welcome to the module Prompt Engineering, Fine-Tuning, and Advanced LLM Architectures. In this module, you will focus on practical techniques for controlling, adapting, and enhancing Large Language Models to meet real-world requirements. You will start with Prompt Engineering, learning the fundamentals of prompt design and how prompt structure directly impacts model output. The module covers proven techniques for crafting effective prompts that improve accuracy, reasoning quality, and response consistency. A hands-on demo will help you see how small prompt changes can significantly influence LLM behavior. Next, you will explore LLM fine-tuning approaches, including prompt tuning and Parameter-Efficient Fine-Tuning (PEFT). You will understand how prompt-based, parameter-efficient methods such as P-Tuning adapt large models with minimal computational cost. The introduction to NVIDIA NeMo provides insight into frameworks used for customizing and optimizing enterprise-scale language models. Finally, you will examine Retrieval-Augmented Generation (RAG) architecture and learn how combining LLMs with external knowledge sources improves factual grounding and domain-specific performance (a small RAG prompt-assembly sketch appears below the syllabus). By the end of this module, you will understand how to design high-quality prompts, apply efficient fine-tuning techniques, and leverage advanced LLM architectures for scalable generative AI solutions.
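
The sketches below are illustrative only and are not taken from the course materials. First, a minimal example of the kind of Hugging Face pipeline demo described in Module 1; the model checkpoint and input texts are assumptions, and the course demo may use different models and tasks.

    # Illustrative Hugging Face pipelines for text generation and classification (Module 1).
    # Checkpoints and inputs are assumptions; the course demo may differ.
    from transformers import pipeline

    # Text generation with a small GPT-2 checkpoint
    generator = pipeline("text-generation", model="gpt2")
    print(generator("Generative AI systems are", max_new_tokens=30)[0]["generated_text"])

    # Text classification with the default sentiment-analysis pipeline
    classifier = pipeline("sentiment-analysis")
    print(classifier("This course made LLM concepts easy to follow."))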
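
Second, a worked illustration of the perplexity metric mentioned in Module 2: perplexity can be computed as the exponential of the average cross-entropy loss over predicted tokens. The toy logits and target token ids below are made up purely for demonstration.

    # Perplexity illustrated as exp(average cross-entropy loss); toy values only.
    import math
    import torch
    import torch.nn.functional as F

    logits = torch.randn(4, 5)            # model scores: (sequence_length, vocab_size)
    targets = torch.tensor([1, 3, 0, 2])  # true next-token ids for each position

    loss = F.cross_entropy(logits, targets)   # average negative log-likelihood
    perplexity = math.exp(loss.item())        # lower perplexity means better prediction
    print(f"loss: {loss.item():.3f}  perplexity: {perplexity:.3f}")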
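
Finally, a small sketch of the Retrieval-Augmented Generation pattern covered in Module 3: retrieve relevant snippets, then assemble a prompt grounded in them. The tiny corpus, the word-overlap retriever, and the prompt wording are stand-ins for a real vector store and the course's own examples.

    # Minimal RAG-style prompt assembly: naive retrieval plus a grounded prompt.
    # Corpus, retrieval scoring, and prompt text are illustrative assumptions.
    corpus = [
        "PEFT methods such as P-Tuning train a small set of prompt parameters.",
        "RLHF aligns model outputs with human preferences.",
        "Perplexity measures how well a model predicts held-out text.",
    ]

    def retrieve(query, docs, k=2):
        # Rank documents by simple word overlap with the query (stand-in for vector search).
        q = set(query.lower().split())
        return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

    def build_prompt(query):
        context = "\n".join("- " + c for c in retrieve(query, corpus))
        return ("Answer using only the context below.\n"
                f"Context:\n{context}\n"
                f"Question: {query}\nAnswer:")

    print(build_prompt("How does P-Tuning adapt large models?"))
    # The assembled prompt would then be passed to an LLM for a grounded answer.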

Taught by

Whizlabs Instructor

