Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

Udacity

Generative AI Fundamentals

via Udacity

Overview

Harness the capabilities of Generative AI with a deep dive into the fundamentals. This course examines how various generative models are built, how they work, and how to use them to their full potential.

Syllabus

  • Introduction to Generative AI Foundations
    • Explore core principles, tools, and ethical use of Generative AI, and discover its real-world impact and foundational models powering creative applications.
  • Generative AI Overview
    • Explore the fundamentals of generative AI, its key modalities, advanced capabilities, and essential ethical considerations shaping responsible AI development.
  • Accessing OpenAI API Keys
  • Applications of Generative AI
    • Explore real-world applications of Generative AI, including LLM-assisted coding, and learn to prompt, validate, and improve AI-generated code and tests.
  • Introduction to Foundation Models
    • Discover foundation models: large, versatile AI systems trained on massive datasets that generalize across tasks, surpassing traditional models in scalability and adaptability.
  • Building Applications using Foundation Models
    • Learn to build text classifiers with foundation models, using zero-shot and few-shot prompt engineering for tasks like sentiment and spam detection, and evaluate classifier accuracy.
  • How Generative AI Works
    • Learn how generative AI creates new data with architectures like Transformers and diffusion models, and how training enables creativity, reasoning, and task-specific abilities.
  • Evaluating Generative AI Models
    • Learn how to assess generative AI using human evaluation, exact metrics, AI judges, and benchmarks, ensuring robust performance for open-ended, probabilistic model outputs.
  • Implementing Evaluations for Generative AI Models
    • Learn practical techniques to evaluate generative AI models, from Exact Match to ROUGE, semantic similarity, code correctness, Pass@k, and LLM-as-a-Judge scoring.
  • Neural Networks and Multilayer Perceptrons
    • Explore neural networks from perceptrons to multilayer perceptrons, learning how they adapt via training, gradient descent, and backpropagation to solve complex AI tasks.
  • Implementing Neural Networks using PyTorch
    • Learn to implement neural networks in PyTorch by mastering tensors, model building, loss functions, optimizers, data loading, and complete training loops for practical machine learning.
  • Model Interpretability and Ethics
    • Explore AI model interpretability and ethics, including bias, misinformation, environmental impact, and fairness for responsible development and deployment of AI technologies.
  • Generating Text using LLMs
    • Discover how LLMs generate text token by token using Hugging Face's Transformers, from tokenization to model use, and explore hands-on demos with efficient generation methods.
  • Role-Based Prompting
    • Explore the theory of using roles or personas to control the tone, style, and expertise of an LLM's output.
  • Implementing Role-Based Prompting with Python
    • Practice iteratively developing a role-based prompt to create a believable historical-figure persona.
  • Adapting Foundation Models
    • Learn to adapt foundation models for specialized tasks using prompt engineering, RAG, fine-tuning, model compression, and agentic AI tools for efficient, tailored AI solutions.
  • Applying PEFT on Foundation Models
    • Learn to efficiently customize foundation models with PEFT and SFT, using LoRA to teach LLMs new skills like spelling via hands-on data preparation and fine-tuning.
  • Post-Training Foundation Models
    • Explore post-training for foundation models, including supervised and preference fine-tuning, to align AI with human values, improve usability, and ensure responsible interactions.
  • Reinforcement Fine-tuning on Foundation Models
    • Learn to fine-tune LLMs for structured tasks like counting and spelling using GRPO and LoRA, applying reinforcement-based reward functions for targeted skill improvements.
  • Teaching an LLM to Count!
    • A hands-on exercise in teaching an LLM to count the number of letters in a word using GRPO.

Taught by

Brian Cruz

Reviews

5.0 rating at Udacity, based on 9 ratings

