Generative AI for Audio and Images: Models and Applications offers an in-depth exploration of how modern generative models such as Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), Transformers, and Diffusion models are used to create, manipulate, and enhance audio, image, and video content.
Learners examine the architectures, training processes, and use cases of these models across different modalities, gaining both conceptual understanding and practical insights through hands-on activities. The course also highlights the ethical and societal implications of generative AI, including bias, transparency, intellectual property, and the challenges of deepfake technologies.
By covering foundational theory as well as state-of-the-art approaches and applications, this course prepares learners to apply and develop generative AI creatively and responsibly for the audio and image modalities.
By the end of this course, learners will be able to:
Outline core concepts, challenges, and the history of AI-generated audio.
Analyze foundational audio generation models such as variational and vector quantized autoencoders (VAEs and VQ-VAEs).
Examine how these models integrate with the latest GenAI technologies to form hybrid, state-of-the-art transformer- and diffusion-based audio generation systems.
Study the architecture and functionality of Generative Adversarial Networks (GANs) and their variations.
Implement and train GAN models to create and enhance visual content.
Explore cutting-edge techniques such as diffusion models and transformers for image and video generation.
Discuss the ethical considerations of generative AI for audio and images.
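As a small taste of the foundational models listed above, the sketch below shows the VAE reparameterization trick in NumPy. This is an illustrative example only, not course material: the function name, latent dimensionality, and shapes are assumptions chosen for brevity.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).

    Drawing the latent z this way moves the randomness into eps, so
    gradients can flow through the encoder outputs mu and log_var
    during training. (Illustrative sketch, not from the course.)
    """
    eps = rng.standard_normal(mu.shape)   # noise from a standard normal
    sigma = np.exp(0.5 * log_var)         # convert log-variance to std dev
    return mu + sigma * eps

rng = np.random.default_rng(0)
mu = np.zeros(4)       # assumed 4-dimensional latent mean
log_var = np.zeros(4)  # log-variance 0 -> sigma = 1
z = reparameterize(mu, log_var, rng)
print(z.shape)  # (4,)
```

In a full VAE the encoder network would produce `mu` and `log_var` from the input, and the decoder would map the sampled `z` back to audio or image space; the trick above is what makes that sampling step differentiable.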