Overview
The generative AI (gen AI) market is expected to grow 46% yearly through 2030 (source: Statista), and gen AI engineers are in high demand. This program gives aspiring data scientists, machine learning engineers, and AI developers the skills in gen AI, large language models (LLMs), and natural language processing (NLP) that employers need.
Gen AI engineers design systems that understand human language. They use LLMs and machine learning to build these systems.
During this program, you will develop skills to build apps using frameworks and pre-trained foundation models such as BERT, GPT, and LLaMA. You’ll use the Hugging Face Transformers library, the PyTorch deep learning library, retrieval-augmented generation (RAG), and the LangChain framework to develop and deploy LLM-based NLP apps. Plus, you’ll explore tokenization, data loaders, language and embedding models, transformer techniques, attention mechanisms, and prompt engineering.
Across the series of short courses in this specialization, you’ll also gain practical experience in hands-on labs and a project you can showcase in interviews.
This program is ideal for gaining the job-ready skills that gen AI engineers, machine learning engineers, data scientists, and AI developers require. Note that you need a working knowledge of Python, machine learning, and neural networks; exposure to PyTorch is helpful.
Syllabus
- Course 1: Generative AI and LLMs: Architecture and Data Preparation
- Course 2: Gen AI Foundational Models for NLP & Language Understanding
- Course 3: Generative AI Language Modeling with Transformers
- Course 4: Generative AI Engineering and Fine-Tuning Transformers
- Course 5: Generative AI Advance Fine-Tuning for LLMs
- Course 6: Fundamentals of AI Agents Using RAG and LangChain
- Course 7: Project: Generative AI Applications with RAG and LangChain
Courses
-
This IBM course will equip you with the skills to implement, train, and evaluate generative AI models for natural language processing (NLP) using PyTorch. You will explore core NLP tasks, such as document classification, language modeling, and language translation, and gain a foundation in building small and large language models. You will learn how to convert words into features using one-hot encoding, bag-of-words, embeddings, and embedding bags, as well as how Word2Vec models represent semantic relationships in text. The course covers training and optimizing neural networks for document categorization, developing statistical and neural N-Gram models, and building sequence-to-sequence models using encoder–decoder architectures. You will also learn to evaluate generated text using metrics such as BLEU. The hands-on labs provide practical experience with tasks such as classifying documents using PyTorch, generating text with language models, and integrating pretrained embeddings like Word2Vec. You will also implement sequence-to-sequence models to perform tasks such as language translation. Enroll today to build in-demand NLP skills and start creating intelligent language applications with PyTorch.
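The word-to-feature techniques this course opens with can be shown in a few lines. The sketch below is a minimal, standard-library illustration of one-hot encoding and bag-of-words (the course itself uses PyTorch tensors for the same ideas); the toy documents and helper names are assumptions for illustration, not course code.

```python
from collections import Counter

def build_vocab(docs):
    """Map each unique token to an integer index."""
    tokens = sorted({t for d in docs for t in d.split()})
    return {tok: i for i, tok in enumerate(tokens)}

def one_hot(token, vocab):
    """One-hot vector: 1 at the token's index, 0 elsewhere."""
    vec = [0] * len(vocab)
    vec[vocab[token]] = 1
    return vec

def bag_of_words(doc, vocab):
    """Count vector: how often each vocabulary token occurs in the document."""
    counts = Counter(doc.split())
    return [counts.get(tok, 0) for tok in vocab]

docs = ["the cat sat", "the dog sat on the mat"]
vocab = build_vocab(docs)           # {'cat': 0, 'dog': 1, 'mat': 2, 'on': 3, 'sat': 4, 'the': 5}
print(bag_of_words("the dog sat on the mat", vocab))  # → [0, 1, 1, 1, 1, 2]
```

Embeddings and embedding bags replace these sparse count vectors with learned dense vectors, which is where PyTorch's `nn.Embedding` and `nn.EmbeddingBag` come in.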
-
Ready to explore the exciting world of generative AI and large language models (LLMs)? This IBM course, part of the Generative AI Engineering Essentials with LLMs Professional Certificate, gives you practical skills to harness AI to transform industries. Designed for data scientists, ML engineers, and AI enthusiasts, you’ll learn to differentiate between various generative AI architectures and models, such as recurrent neural networks (RNNs), transformers, generative adversarial networks (GANs), variational autoencoders (VAEs), and diffusion models. You’ll also discover how LLMs, such as generative pretrained transformers (GPT) and bidirectional encoder representations from transformers (BERT), power real-world language tasks. Get hands-on with tokenization techniques using NLTK, spaCy, and Hugging Face, and build efficient data pipelines with PyTorch data loaders to prepare models for training. A basic understanding of Python, PyTorch, and familiarity with machine learning and neural networks are helpful but not mandatory. Enroll today and get ready to launch your journey into generative AI!
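The two preparation steps this course covers, tokenization and batching with data loaders, can be sketched without any library. The toy tokenizer and `batches` helper below are assumptions standing in for NLTK/spaCy/Hugging Face tokenizers and PyTorch's `DataLoader`, which do the same jobs robustly and at scale.

```python
def tokenize(text):
    """Minimal whitespace/punctuation tokenizer; real pipelines use
    NLTK, spaCy, or Hugging Face tokenizers instead."""
    return text.lower().replace(".", " .").split()

def batches(examples, batch_size):
    """Yield fixed-size batches, mimicking what a PyTorch DataLoader does."""
    for i in range(0, len(examples), batch_size):
        yield examples[i:i + batch_size]

corpus = ["Transformers power LLMs.", "GANs generate images.", "VAEs learn latents."]
tokenized = [tokenize(s) for s in corpus]
for batch in batches(tokenized, 2):
    print(batch)
```

A real data pipeline would additionally map tokens to integer IDs and pad each batch to a uniform length before feeding it to a model.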
-
This course provides a practical introduction to using transformer-based models for natural language processing (NLP) applications. You will learn to build and train models for text classification using encoder-based architectures like Bidirectional Encoder Representations from Transformers (BERT), and explore core concepts such as positional encoding, word embeddings, and attention mechanisms. The course covers multi-head attention, self-attention, and causal language modeling with GPT for tasks like text generation and translation. You will gain hands-on experience implementing transformer models in PyTorch, including pretraining strategies such as masked language modeling (MLM) and next sentence prediction (NSP). Through guided labs, you’ll apply encoder and decoder models to real-world scenarios. This course is designed for learners interested in generative AI engineering and requires prior knowledge of Python, PyTorch, and machine learning. Enroll now to build your skills in NLP with transformers!
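The attention mechanism at the heart of these architectures is compact enough to sketch directly. Below is a standard-library implementation of scaled dot-product attention, softmax(QKᵀ/√d)·V, the building block that multi-head attention repeats in parallel; the tiny two-token matrices are illustrative values, not course data.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: each query row attends over all keys,
    and the resulting weights mix the value rows."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Two tokens, dimension 2: each output row is a weighted mix of both value rows.
Q = K = V = [[1.0, 0.0], [0.0, 1.0]]
print(attention(Q, K, V))
```

Causal (decoder-style) attention, used in GPT for text generation, simply masks the scores so each position cannot attend to later ones.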
-
The demand for technical generative AI (GenAI) skills is increasing, and businesses are actively seeking AI engineers who can work with large language models (LLMs). This IBM course is designed to build job-ready skills that can accelerate your AI career. In this course, you’ll explore transformers and key model frameworks and platforms, including Hugging Face and PyTorch. You’ll begin with a foundational framework for optimizing LLMs and quickly advance to fine-tuning generative AI models. You’ll also learn advanced techniques such as parameter-efficient fine-tuning (PEFT), low-rank adaptation (LoRA), quantized LoRA (QLoRA), and prompting. The hands-on labs will give you valuable, practical experience including loading, pretraining, and fine-tuning models using industry-standard tools. These skills are directly applicable in real-world AI roles and are great for showcasing in interviews. If you’re ready to take your AI career to the next level and strengthen your resume with in-demand Gen AI competencies, enroll today and start applying your new skills in just one week!
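The core idea behind LoRA, one of the PEFT techniques listed above, fits in a short sketch: freeze the pretrained weight W and learn only a low-rank update, so the layer computes y = xW + (α/r)·xAB. The pure-Python matrices and shapes below are toy assumptions for illustration; in practice libraries such as Hugging Face PEFT apply this inside transformer layers.

```python
def matmul(A, B):
    """Plain list-of-lists matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_forward(x, W, A, B, alpha, r):
    """y = x W + (alpha / r) * x A B.
    W stays frozen; only A (d x r) and B (r x k) are trained,
    so the trainable parameter count scales with the rank r."""
    base = matmul(x, W)
    update = matmul(matmul(x, A), B)
    scale = alpha / r
    return [[b + scale * u for b, u in zip(brow, urow)]
            for brow, urow in zip(base, update)]

# Toy shapes: d = 2, k = 2, rank r = 1 (assumed for illustration).
x = [[1.0, 2.0]]
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen pretrained weight
A = [[1.0], [0.0]]             # trainable down-projection (d x r)
B = [[0.5, 0.5]]               # trainable up-projection (r x k)
print(lora_forward(x, W, A, B, alpha=1.0, r=1))  # → [[1.5, 2.5]]
```

QLoRA adds quantization of the frozen base weights on top of this, shrinking memory further while training the same small A and B matrices.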
-
Business demand for technical gen AI skills is exploding, and AI engineers who can work with large language models (LLMs) are in high demand. This Fundamentals of Building AI Agents using RAG and LangChain course builds job-ready skills that will fuel your AI career. In this course, you’ll explore retrieval-augmented generation (RAG), prompt engineering, and LangChain concepts. You’ll learn about the RAG process, its applications, encoders and tokenizers, and the FAISS library for high-dimensional vector search. Then, you’ll apply in-context learning and advanced prompt engineering techniques, including prompt templates and example selectors, to generate accurate responses. You’ll also work with LangChain’s tools, components, document loaders, retrievers, chains, and agents to simplify LLM-based application development. Through hands-on labs, you’ll develop AI agents that integrate LLMs, LangChain, and RAG technologies. You will also complete a real-world project you can showcase in interviews. A comprehensive cheat sheet and glossary are included to reinforce your learning. Enroll today and build in-demand generative AI skills in just 8 hours!
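The retrieval step of RAG reduces to nearest-neighbor search over embeddings, which is what FAISS provides at scale. The sketch below uses brute-force cosine similarity over hypothetical 3-dimensional vectors (a real encoder would produce hundreds of dimensions); the documents and numbers are invented for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def retrieve(query_vec, index, k=1):
    """Return the k documents whose embeddings are closest to the query;
    FAISS does exactly this, but over millions of high-dimensional vectors."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Hypothetical embeddings standing in for a real encoder's output.
index = [
    ("LangChain chains LLM calls together", [0.9, 0.1, 0.0]),
    ("FAISS searches dense vectors",        [0.1, 0.9, 0.1]),
    ("Prompt templates format inputs",      [0.0, 0.2, 0.9]),
]
print(retrieve([0.2, 0.8, 0.1], index, k=1))  # → ['FAISS searches dense vectors']
```

In a full RAG pipeline, the retrieved passages are then inserted into a prompt template so the LLM can ground its answer in them.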
-
Fine-tuning large language models (LLMs) is essential for aligning them with specific business needs, improving accuracy, and optimizing performance. In today’s AI-driven world, organizations rely on fine-tuned models to generate precise, actionable insights that drive innovation and efficiency. This course equips aspiring generative AI engineers with the in-demand skills employers are actively seeking. You’ll explore advanced fine-tuning techniques for causal LLMs, including instruction tuning, reward modeling, and direct preference optimization. Learn how LLMs act as probabilistic policies for generating responses and how to align them with human preferences using tools such as Hugging Face. You’ll dive into reward calculation, reinforcement learning from human feedback (RLHF), proximal policy optimization (PPO), the PPO trainer, and optimal strategies for direct preference optimization (DPO). The hands-on labs in the course will provide real-world experience with instruction tuning, reward modeling, PPO, and DPO, giving you the tools to confidently fine-tune LLMs for high-impact applications. Build job-ready generative AI skills in just two weeks! Enroll today and advance your career in AI!
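The DPO objective mentioned above can be computed for a single preference pair in a few lines: the loss is −log σ(β·margin), where the margin compares how much more the trained policy prefers the chosen response over the rejected one, relative to a frozen reference model. The log-probability values below are hypothetical stand-ins for per-response token log-probability sums.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct preference optimization loss for one (chosen, rejected) pair.
    Minimizing it pushes the policy to raise the chosen response's
    log-probability, and lower the rejected one's, relative to the
    frozen reference model."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(sigmoid(beta * margin))

# Hypothetical per-response log-probabilities (sums over tokens).
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))  # policy already prefers the chosen answer
print(dpo_loss(-12.0, -10.0, -11.0, -11.0))  # policy prefers the rejected answer: larger loss
```

Unlike RLHF with PPO, this needs no separate reward model or sampling loop, which is why DPO is covered as the simpler alignment alternative.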
Taught by
Ashutosh Sagar, Fateme Akbari, Joseph Santarcangelo, Kang Wang, Roodra Pratap Kanwar and Wojciech 'Victor' Fulmyk