

Generative AI Foundations in Python

Packt via Coursera

Overview

This course provides a clear, practical foundation in generative AI and large language models, combining theory with real-world application. It equips learners with the skills to implement and fine-tune models effectively, while emphasizing ethical and responsible AI use. Designed for professionals who want to apply AI in their work, it simplifies complex concepts and offers actionable insights.

Learners explore the foundational elements of transformer-based LLMs and diffusion models, then apply that knowledge in hands-on Python projects. The course shows how to fine-tune models and adapt them to specific domains, giving learners the tools to deploy AI solutions responsibly. What sets it apart is the pairing of theoretical understanding with practical application, guiding learners through real-world challenges while maintaining an ethical focus.

Ideal for developers, data scientists, and machine learning engineers, the course is designed for those with a basic understanding of machine learning and Python who wish to explore generative AI.

Syllabus

  • Understanding Generative AI: An Introduction
    • In this section, we explore generative AI fundamentals, comparing GANs and transformers with traditional models, and emphasize ethical and practical applications in real-world scenarios.
  • A Closer Look at GANs
    • In this section, we explore GANs, diffusers, and transformers for image and text generation, focusing on their architectures, applications, and comparative strengths in creative and technical domains.
  • Tracing the Foundations of Natural Language Processing and the Impact of the Transformer
    • In this section, we explore the evolution of natural language processing, focusing on the transformer architecture's role in modern large language models and generative AI. Key concepts include self-attention mechanisms, sequence-to-sequence learning, and deep learning foundations.
  • Applying Pretrained Generative Models: From Prototype to Production
    • In this section, we explore transitioning generative AI from prototyping to production, focusing on setting up a Python environment, deploying pretrained LLMs, and ensuring scalable, reliable model deployment for real-world applications.
  • Fine-Tuning Generative Models for Specific Tasks
    • In this section, we explore fine-tuning generative models for task-specific applications like Q&A. Key concepts include parameter-efficient techniques and brand-aligned response generation.
  • Understanding Domain Adaptation for Large Language Models
    • In this section, we explore domain adaptation for LLMs, focusing on techniques like LoRA to enhance model understanding of specialized financial language and evaluate performance using ROUGE metrics.
  • Mastering the Fundamentals of Prompt Engineering
    • In this section, we explore zero- and few-shot prompting, prompt-chaining, and RAG strategies to enhance LLM performance without fine-tuning, focusing on practical applications and accurate task execution.
  • Addressing Ethical Considerations and Charting a Path Toward Trustworthy Generative AI
    • In this section, we examine ethical norms, bias in generative AI, and strategies to minimize harm, emphasizing responsible development and trustworthy systems.
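The domain-adaptation section above mentions LoRA (low-rank adaptation). As a rough illustration of the idea, not the course's actual material: instead of updating a full weight matrix W (d × k), LoRA trains two small matrices B (d × r) and A (r × k) with rank r much smaller than d and k, and uses W + B·A as the adapted weight. All shapes and values below are invented for the sketch.

```python
# Minimal, hypothetical sketch of the LoRA update W' = W + B @ A.
# The base weight W stays frozen; only the small adapters A and B
# would be trained. Shapes here are toy-sized for illustration.

def matmul(X, Y):
    """Multiply two matrices represented as lists of lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_adapted_weight(W, A, B):
    """Return W + B @ A, the LoRA-adapted weight matrix."""
    BA = matmul(B, A)
    return [[w + d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Frozen 2 x 2 base weight and a rank-1 adapter pair.
W = [[1.0, 0.0],
     [0.0, 1.0]]
B = [[1.0],
     [2.0]]        # 2 x 1
A = [[0.5, 0.5]]   # 1 x 2

W_adapted = lora_adapted_weight(W, A, B)
# Only r * (d + k) = 4 adapter parameters are trained instead of
# d * k base weights; for realistic layer sizes the savings are large.
```

In practice this is handled by a library (e.g. Hugging Face's PEFT) rather than hand-rolled matrices; the point is only that the trainable parameter count scales with the rank r, not the full layer size.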
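The same section evaluates adapted models with ROUGE metrics. As a hedged, toy illustration of what such a metric measures (real evaluations use a library such as `rouge-score`), here is ROUGE-1 recall, the fraction of reference unigrams that also appear in the generated text; the example strings are invented:

```python
# Toy sketch of ROUGE-1 recall: unigram overlap between a reference
# text and a model-generated candidate. Not a full ROUGE implementation
# (no stemming, no F-measure, no ROUGE-L).
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """Fraction of reference unigrams that also occur in the candidate."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum(min(n, cand_counts[w]) for w, n in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)

score = rouge1_recall("revenue grew ten percent",
                      "revenue grew by ten percent")
# Every reference word appears in the candidate, so recall is 1.0.
```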
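The prompt engineering section covers zero- and few-shot prompting. The core mechanic can be sketched in a few lines: prepend worked examples to the prompt so the model infers the task without any fine-tuning. The task wording, example texts, and labels below are invented for illustration:

```python
# Hypothetical sketch of few-shot prompt construction: the prompt itself
# carries the "training signal" as in-context examples. With an empty
# examples list, the same function yields a zero-shot prompt.

def build_few_shot_prompt(examples, query, task="Classify the sentiment"):
    """Assemble a few-shot classification prompt from (text, label) pairs."""
    lines = [f"{task}.", ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Text: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    [("Great course, well paced.", "positive"),
     ("The audio was hard to follow.", "negative")],
    "Clear explanations and useful projects.",
)
# `prompt` would then be sent to an LLM completion endpoint, which is
# expected to continue the final "Sentiment:" line.
```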

Taught by

Packt - Course Instructors

