Overview
This 20-course specialization takes you from understanding generative Artificial Intelligence (AI) foundation models to deploying production-grade, multi-model systems on Amazon Web Services (AWS). You begin with Amazon Bedrock and Large Language Model (LLM) fundamentals, then progress through prompt architecture, Natural Language Processing (NLP) pipeline design, and AI orchestration patterns that bridge local and cloud inference. Intermediate courses cover enterprise AIOps with Amazon Q Business, AI security and governance with Bedrock Guardrails, performance engineering with Rust-based AWS Lambda functions, and deterministic LLM programming with quality metrics. Advanced courses introduce agentic AI with actor models, multi-modal development using screenshots as prompt context, privacy-conscious coding practices, data pipelines with Deno, and Model Context Protocol (MCP) agent design. The specialization concludes with conversational bot architecture, AI-powered code review automation via GitHub Actions, LLM security vulnerability analysis, production Software as a Service (SaaS) application development, and a capstone project deploying serverless multi-model systems with Cargo Lambda and Amazon Bedrock routing. Every course includes hands-on demonstrations in Rust and Python, automated testing, and containerized deployment workflows.
Syllabus
- Course 1: LLM Security and Vulnerabilities
- Course 2: CLI Automation with Amazon Q and CloudShell
- Course 3: AI-Powered Analytics and Performance Engineering
- Course 4: Deterministic LLM Programming
- Course 5: Building Deterministic MCP Agents
- Course 6: Enterprise AIOps with Amazon Q Business
- Course 7: Multi-modal AI
- Course 8: Prompt Architecture and NLP on Amazon Bedrock
- Course 9: Privacy-Conscious Development with AI Assistants
- Course 10: Agentic AI: Actor Models and Subagent Architecture
- Course 11: Build a Production SaaS Application with AI
- Course 12: AI Tooling Capstone: Serverless Multi-Model Systems
- Course 13: AI Debugging and Test-Driven Fixes
- Course 14: AI Orchestration: From Local Models to Cloud
- Course 15: AI Security and Governance on AWS
- Course 16: AWS Generative AI and Foundation Models
- Course 17: AWS Intelligent Applications with Amazon Bedrock
- Course 18: AI Code Review Automation with GitHub Actions
- Course 19: Conversational Bot Architecture with Rust and Deno
- Course 20: AI-Powered Data Pipelines with Deno
Courses
- Course 3: AI-Powered Analytics and Performance Engineering
Learn to build AI-powered analytics pipelines on AWS using Amazon Bedrock, Lambda benchmarking, and Amazon Q for business intelligence. You will explore how Bedrock integrates with Rust for high-performance analytics, calling foundation model APIs from serverless architectures with token-level scaling. The course covers building Rust-Bedrock analytics pipelines that combine model invocation with data processing, and using generative AI to convert Python code to Rust for performance-critical workloads. You will construct intelligent code transformation pipelines that automate language migration, add performance instrumentation with GenAI, and build end-to-end AWS performance pipelines from instrumentation to analysis. The benchmarking module demonstrates real-world Lambda cost comparison between Python and Rust using synthetic Fortune 500 workloads, showing 10x cost differences at scale with three billion monthly invocations. You will use SageMaker DataWrangler for analytics data preparation and explore energy efficiency considerations for AI workloads. The Amazon Q module covers transforming raw data into actionable insights through automatic anomaly detection, natural language processing that converts questions into SQL and Python queries, and CodeCatalyst dev environments for analytics projects. By completing this course, you will be able to build Rust-Bedrock analytics pipelines, benchmark Lambda performance for cost optimization, and use Amazon Q for AI-powered business intelligence.
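The Python-versus-Rust Lambda cost comparison described above reduces to simple arithmetic. The sketch below is illustrative only: the per-GB-second and per-request prices are assumptions (they track published AWS Lambda pricing at the time of writing but vary by region), and the duration and memory profiles are hypothetical stand-ins for the course's synthetic workloads.

```python
def lambda_monthly_cost(invocations: int, duration_s: float, memory_gb: float,
                        gb_s_price: float = 0.0000166667,
                        price_per_million_requests: float = 0.20) -> float:
    """Monthly Lambda bill: compute charge (GB-seconds) plus flat request charge.
    Prices are assumed for illustration, not authoritative."""
    compute = invocations * duration_s * memory_gb * gb_s_price
    requests = invocations / 1_000_000 * price_per_million_requests
    return compute + requests

INVOCATIONS = 3_000_000_000  # three billion invocations per month, as in the course

# hypothetical runtime profiles: Python at 100 ms / 512 MB, Rust at 10 ms / 128 MB
python_cost = lambda_monthly_cost(INVOCATIONS, 0.100, 0.5)
rust_cost = lambda_monthly_cost(INVOCATIONS, 0.010, 0.125)
```

At these assumed profiles the compute portion alone differs by a factor of 40; the flat per-request charge, which both runtimes pay equally, narrows the total gap, which is why realistic cost modeling has to include both terms.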
- Course 10: Agentic AI: Actor Models and Subagent Architecture
Learn to build production agentic AI systems using actor model foundations, subagent architecture patterns, and multi-language implementations. You will explore the actor paradigm for concurrent computation, where isolated processes communicate through message-passing with zero shared memory, eliminating race conditions and deadlocks that crash production systems. The course covers Actix supervision trees in Rust for fault-tolerant actor recovery and location transparency for seamless distributed scaling. You will implement Claude subagent patterns for task-specific AI configurations with isolated state and tool access, and examine pmat subagent architecture for code quality analysis through specialized delegation pipelines. The subagent module demonstrates supervised multi-agent coordination, applies Amdahl's law to understand parallelization limits of subagent systems, and explains why simple agents often outperform complex multi-agent designs. You will also explore small language models as efficient alternatives for agent reasoning tasks. The hands-on module covers actor implementations in three languages: Deno with TypeScript, Go with goroutines and channels, and Rust with ownership-based memory safety. You will build Go supervisor patterns for automatic actor recovery and examine a complete agentic coding project repository. By completing this course, you will be able to design fault-tolerant agentic systems using actor model principles, implement subagent architectures with Claude, and build actor patterns across multiple programming languages.
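The message-passing discipline described above can be sketched in a few lines of Python (the course implements it in Deno, Go, and Rust; this is a minimal equivalent for illustration): each actor owns private state, and the only way in or out is a message through its mailbox, so there is no shared memory to race on.

```python
import queue
import threading

class Actor:
    """Minimal actor: isolated state, a mailbox, and message-passing only."""
    def __init__(self, handler):
        self._mailbox = queue.Queue()
        self._handler = handler
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, msg):
        # the ONLY public interface: drop a message in the mailbox
        self._mailbox.put(msg)

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:          # sentinel shuts the actor down
                break
            self._handler(msg)

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

def make_counter():
    # state lives inside the closure; no other thread can touch it directly
    state = {"count": 0}
    def handle(msg):
        kind, reply_to = msg
        if kind == "incr":
            state["count"] += 1
        elif kind == "get":
            reply_to.put(state["count"])  # replies also travel by message
    return Actor(handle)
```

Usage: send `("incr", None)` messages to mutate the counter, then `("get", reply_queue)` to read it back; the reply arrives on a queue rather than through shared memory.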
- Course 5: Building Deterministic MCP Agents
Learn to build deterministic AI agents using the Model Context Protocol (MCP) and structured quality metrics for repeatable, verifiable outputs. You will explore PMAT as a quality assessment tool for software projects, applying lean manufacturing principles from the Toyota Way including continuous improvement and waste elimination to software quality engineering. The course covers the certainty-scope tradeoff for balancing test coverage and confidence, finite state machine models for deterministic agent behavior, and MCP protocol architecture for structured agent-tool communication. You will analyze survivorship bias in programming language popularity rankings and apply six essential quality metrics for comprehensive project assessment and automated scoring. The testing module covers six essential test types for agent validation, property-based testing for verifying behavioral invariants, and fuzz testing for discovering edge cases using agentic AI. You will use Claude Code as an MCP client integrated with PMAT for automated quality analysis and walk through real-world project examples demonstrating quality scoring across multiple codebases. By completing this course, you will be able to design deterministic agent systems using MCP, apply comprehensive quality metrics with PMAT, and implement property and fuzz testing strategies for robust agent validation.
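A finite state machine makes agent behavior deterministic because every (state, event) pair has exactly one outcome, and anything outside the table is rejected rather than improvised. A minimal Python sketch, with hypothetical states and events (the course's actual agent states differ):

```python
# assumed transition table for a hypothetical task agent
TRANSITIONS = {
    ("idle", "task_received"):     "planning",
    ("planning", "plan_ready"):    "executing",
    ("executing", "checks_passed"): "done",
    ("executing", "checks_failed"): "planning",  # loop back and re-plan
}

class DeterministicAgent:
    def __init__(self):
        self.state = "idle"
        self.history = ["idle"]  # auditable trace of every state visited

    def handle(self, event: str) -> str:
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            # undefined behavior is an error, never a guess
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = nxt
        self.history.append(nxt)
        return nxt
```

Replaying the same event sequence always yields the same history, which is what makes the agent's behavior testable with ordinary assertions rather than statistical evaluation.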
- Course 2: CLI Automation with Amazon Q and CloudShell
Learn to automate AWS development workflows using Amazon Q and CloudShell for container deployment, infrastructure as code, and Rust development. You will explore Amazon Q as an AI-powered CLI assistant with inline completion in CloudShell's ZSH environment, generating commands for AWS resource management. The course covers Docker integration in CloudShell as a stealth feature, running containers directly in the browser-based shell with pre-installed Docker, checking available CPUs and memory, and building container workflows from pull to run. You will automate Docker build, tag, and push operations from CloudShell to Amazon ECR, including ECR authentication and image registry management. The CDK module demonstrates deploying Lambda functions with Amazon Q assistance, from CDK bootstrap through stack deployment, using Q chat to generate infrastructure as code configurations. You will develop Rust applications in CloudShell with Amazon Q inline completion in ZSH, exploring AI-powered code generation and debugging directly in the terminal. The course also covers the Docker-to-ECR architecture for production container deployments, building complete pipelines from CloudShell build to ECR push to Lambda deployment. By completing this course, you will be able to automate AWS workflows with Amazon Q in CloudShell, deploy containers to ECR, and build CDK infrastructure with AI assistance.
- Course 4: Deterministic LLM Programming
Learn to build production-grade LLM systems using AWS Bedrock, local inference toolchains, and systematic quality evaluation. You will explore retrieval-augmented generation (RAG) on AWS, configuring Bedrock knowledge bases with S3 data sources for document-grounded responses, and building Rust applications that interact with Bedrock model APIs. The course covers tokenization fundamentals, multi-model architectures for routing requests to appropriate foundation models, and the Bedrock knowledge agent workflow from data ingestion to response generation. You will compile llama.cpp with hardware-specific optimization flags, work with the GGUF file format for quantized model distribution, and deploy Qwen 2.5 Coder as a local coding assistant on AWS GPU instances. The local LLM toolchain module demonstrates Amdahl's law applied to parallel compilation, Bedrock provisioned throughput for dedicated model capacity, and prompt evaluation in the Bedrock console. You will use the UV package manager for Python dependency management in LLM projects and explore Amazon Q Developer for AI-assisted code generation and documentation. The course also covers SageMaker Canvas for no-code ML development, including dataset preparation and AutoML training. By completing this course, you will be able to design RAG pipelines on AWS, run optimized local LLM inference with llama.cpp, and evaluate LLM quality metrics for production deployments.
- Course 6: Enterprise AIOps with Amazon Q Business
Learn to deploy and manage enterprise AI operations on AWS using Amazon Q Business, CloudShell, and Amazon Bedrock. You will explore Amazon Q Business as an enterprise AI assistant built on Bedrock with enterprise-grade security, natural language processing for conversational interactions, and data source connectors for organizational knowledge. The course covers AI-assisted CLI operations using CloudShell with Amazon Q, foundational CloudShell scripting patterns for Bedrock API exploration, and programmatic management of AI resources. You will implement cost control strategies using AWS cost anomaly detection to identify spending spikes, manage SageMaker resources efficiently, and understand diminishing returns in LLM scaling to guide cost-effective model selection. The enterprise MLOps framework module covers operationalizing machine learning pipelines with governance and repeatability. In the final module, you build enterprise AIOps patterns with Bedrock, design RAG workflows using knowledge bases backed by S3 data sources, and prototype models rapidly in the Bedrock console. By completing this course, you will be able to deploy Amazon Q Business for enterprise knowledge access, manage AI costs with CloudShell automation, and implement production AIOps workflows with Bedrock.
- Course 1: LLM Security and Vulnerabilities
Identify, analyze, and defend against the security vulnerabilities that arise when Large Language Models (LLMs) are integrated into production applications. This course begins with how LLMs function in applications—tokenization, next-token prediction, and the architectural patterns that determine attack surface—then surveys real-world application types including Application Programming Interface (API)-based services, embedded-model deployments, and multi-model orchestration pipelines. You will examine each architecture's distinct security profile and the trade-offs that shape deployment decisions. The second module provides a systematic walkthrough of LLM-specific vulnerability categories: prompt injection, insecure output handling, model theft and replication through distillation, sensitive information disclosure, insecure plugin design, excessive agency, and denial-of-service attacks. For each vulnerability you will study the attack mechanism, analyze why LLM behavior makes it exploitable, and apply concrete defense patterns including input sanitization, output validation, permission boundaries, and rate limiting. A capstone assessment synthesizes these skills into an end-to-end security evaluation of an LLM-powered system.
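The defense patterns named above — input sanitization, output validation, permission boundaries — can be illustrated with a small Python sketch. The regex denylist and the `ACTION:` convention here are assumptions for demonstration only; no pattern list is exhaustive, and real deployments layer many additional controls.

```python
import re

# assumed, deliberately incomplete denylist of classic injection phrasings
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"system prompt",
    r"you are now",
]

def screen_input(user_text: str) -> str:
    """Input sanitization: reject text matching known injection phrasings."""
    lowered = user_text.lower()
    for pat in INJECTION_PATTERNS:
        if re.search(pat, lowered):
            raise ValueError("possible prompt injection detected")
    return user_text

def validate_output(model_text: str, allowed_actions: set) -> str:
    """Output validation / excessive-agency guard: a hypothetical convention
    where the model requests tools via 'ACTION:<name>'; anything not on the
    allowlist is refused before it can execute."""
    for action in re.findall(r"ACTION:(\w+)", model_text):
        if action not in allowed_actions:
            raise ValueError(f"disallowed action: {action}")
    return model_text
```

The allowlist direction of `validate_output` is the important part: permission boundaries enumerate what the model may do, rather than trying to enumerate everything it must not.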
- Course 7: Multi-modal AI
Learn to build production applications by combining visual and textual inputs with AI coding tools. You will explore multi-modal programming where screenshots, images, and text serve as inputs for AI-assisted code generation, and set up development environments configured for visual AI workflows. The course covers prompt engineering with visual context to improve code generation accuracy, and hands-on development with GitHub Copilot in VS Code for inline suggestions and chat-based interactions. You will build a complete project using live reload and browser developer tools for rapid feedback between AI generation and visual output. The iterative development module teaches documentation-driven design where documentation guides AI toward desired outcomes, image-based iteration for refining generated code through visual comparison, and automated checks and validations that maintain quality through development cycles. You will learn to identify and overcome common iteration challenges including regression and context drift. The advanced module covers Model Context Protocol for connecting AI tools with external capabilities, Playwright for browser automation and visual testing, and Playwright MCP for AI-driven browser interactions that validate web applications directly. By completing this course, you will be able to convert screenshots into production code through iterative, automated, multi-modal AI workflows.
- Course 9: Privacy-Conscious Development with AI Assistants
Learn to use AI coding assistants while maintaining privacy and security standards in production development workflows. You will explore privacy-conscious development principles, comparing web-based and command-line AI tool interfaces to understand their data handling characteristics and privacy implications. The course covers GitHub Advanced Security and Grype for automated vulnerability scanning, and hands-on AI-assisted code review using Claude Code to detect security issues including hardcoded passwords, exposed API keys, and common vulnerabilities. You will evaluate multiple AI tools including Windsurf and Gemini CLI, learning safe usage practices and secure prompting techniques that avoid exposing sensitive data. The security vulnerabilities module covers detecting and preventing SQL injection, path traversal in file handling, HTTP header misconfiguration, and vulnerable code patterns using AI analysis. You will implement security automation with GitHub Advanced Security code scanning, Dependabot for automated dependency updates, container scanning with Grype, and comprehensive security scanning pipelines for continuous vulnerability detection. By completing this course, you will be able to select and configure AI coding assistants based on privacy requirements, conduct AI-assisted security code reviews, and build automated security scanning pipelines that protect production applications.
- Course 8: Prompt Architecture and NLP on Amazon Bedrock
Learn to design production prompt architectures and build advanced NLP tools on Amazon Bedrock. You will explore the token processing lifecycle from raw text input through tokenization to model output, then design reusable prompt templates with variable placeholders, version tracking, and A/B testing through Bedrock prompt management. The course covers prompt-as-code workflows that integrate prompt lifecycle management with existing DevOps pipelines, including programmatic prompt creation and invocation via the AWS CLI. In the second module, you build advanced NLP implementations using Bedrock agents with chain-of-thought prompting and the five-whys analysis methodology for root cause investigation. You construct NLP agent pipelines that decompose complex language tasks into multi-stage processing workflows, integrate Amazon Transcribe for speech-to-text input layers, and build custom Rust NLP tools including SVGen for AI-powered SVG diagram generation and an Ollama-Bedrock bridge for hybrid local-cloud NLP deployment. By completing this course, you will be able to architect versioned prompt systems, implement chain-of-thought agent workflows, and build production NLP tools in Rust.
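A reusable template with variable placeholders and version tracking — the kind of object Bedrock prompt management stores — can be modeled in plain Python. The field names and template below are illustrative assumptions, not the Bedrock API:

```python
from dataclasses import dataclass
from string import Template

@dataclass(frozen=True)
class PromptTemplate:
    name: str
    version: str   # tracked per revision, enabling A/B tests between versions
    body: str      # uses $placeholders, e.g. "$tone" and "$ticket"

    def render(self, **variables: str) -> str:
        # substitute() raises KeyError on any missing placeholder, catching
        # template/variable drift between versions at call time
        return Template(self.body).substitute(**variables)

# hypothetical template under version control
summarize_v2 = PromptTemplate(
    name="support-summarizer",
    version="2.1.0",
    body="Summarize the ticket below in a $tone tone:\n$ticket",
)
```

In a prompt-as-code workflow, each `PromptTemplate` revision would live in the repository and flow through the same review and CI pipeline as application code.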
- Course 18: AI Code Review Automation with GitHub Actions
Build an AI-powered code review bot from scratch and publish it to the GitHub Marketplace. This hands-on course walks you through the complete lifecycle of creating a GitHub Action that uses Large Language Models to automatically review pull requests and provide actionable feedback on code quality. You start by exploring why automated code review matters, examining real pull requests in complex projects, and understanding the architecture of AI review pipelines built on GitHub Actions. You then define review criteria using the pmat code quality analysis tool, study existing review actions as reference implementations, and develop prompt engineering strategies that produce useful AI feedback. In the implementation phase, you apply documentation-driven development to plan your action, build it with AI assistance, add tests, and refine through local testing strategies. You deploy the action to GitHub, use it on real pull requests, and confront practical challenges of generative AI including non-deterministic responses. The course concludes with writing clear action documentation and publishing your review bot to the GitHub Marketplace for community distribution.
- Course 13: AI Debugging and Test-Driven Fixes
Learn to debug software systematically using AI tools combined with test-driven development strategies. You will explore why AI debugging is useful for pattern recognition across large codebases, and understand the challenges with AI output including hallucination risks and the importance of verifying AI-generated suggestions against actual code behavior. The course covers project architecture analysis as a prerequisite for effective debugging, using documentation to provide AI tools with project-specific context that narrows suggestions and reduces hallucination. You will apply test-driven debugging where tests isolate buggy components, define bugs precisely through failing test cases, and verify fixes without regressions. The test-first approach demonstrates how writing a failing test before fixing a bug ensures the fix addresses the actual problem. The advanced module covers context gathering techniques that provide AI tools with logs, traces, and code history for accurate diagnosis, structured logging designed for both human and AI consumption, and finding debugging direction through contextual analysis rather than undirected AI queries. You will explore proactive bug hunting using AI to discover unknown defects by analyzing source code for potential issues ranked by severity. The course concludes with a complete framework integrating testing, context gathering, logging, and AI analysis into a unified debugging workflow. By completing this course, you will be able to combine test-driven development with AI-assisted debugging to find, reproduce, and fix bugs systematically.
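The test-first pattern above — write a failing test that pins the bug down, then fix the code until the test passes without breaking existing tests — looks like this in Python. The buggy `median` function is a hypothetical example, not taken from the course:

```python
def median(values):
    """Median of a non-empty list. The original (buggy) version returned
    sorted(values)[len(values) // 2] for every input, which is wrong for
    even-length lists; the failing test below forced this fix."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def test_median_even_length():
    # Step 1: this test FAILED against the buggy implementation (which
    # returned 3), reproducing the defect precisely before any fix.
    assert median([1, 2, 3, 4]) == 2.5

def test_median_odd_length_no_regression():
    # Step 2: existing behavior must survive the fix.
    assert median([3, 1, 2]) == 2
```

The same two-step shape applies when the fix comes from an AI assistant: the failing test defines the bug unambiguously, and the passing suite verifies the suggestion against actual behavior rather than trusting the model's explanation.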
- Course 14: AI Orchestration: From Local Models to Cloud
Learn to orchestrate AI systems across local and cloud environments through hands-on infrastructure setup, model deployment, and workflow integration. You will build a prompt engineering pyramid from basic prompts to chain-of-thought reasoning implemented in Rust, then evaluate six decision factors for choosing between local and cloud models including latency, throughput, cost, and privacy. The course covers local AI infrastructure in depth: running Ollama with custom Modelfiles for task-specific assistants, deploying llamafile for zero-dependency portable inference, compiling Rust Candle with CUDA for GPU-accelerated local inference, and optimizing local RAG with caching strategies. You will configure a complete AI workstation with tmux for session management, nvidia-smi and Zenith for GPU monitoring, and NVIDIA GPU optimization. The final module covers cloud workflows including AWS Spot instances for cost-effective GPU compute, Hugging Face model discovery and download, and GitHub AI models integration. By completing this course, you will be able to set up local AI infrastructure, deploy models across local and cloud environments, and design orchestration workflows that balance cost, privacy, and performance.
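A decision rubric like the six factors above can be mechanized as a simple score. Note the assumptions: the course names latency, throughput, cost, and privacy; "quality" and "ops_burden" are hypothetical stand-ins to round out six, and the scoring scale and threshold are illustrative.

```python
# four factor names come from the course description; "quality" and
# "ops_burden" are assumed placeholders for the remaining two
FACTORS = ("latency", "throughput", "cost", "privacy", "quality", "ops_burden")

def prefer_local(scores: dict) -> bool:
    """scores: 1 (pulls toward cloud) .. 5 (pulls toward local), per factor.
    Returns True when the average pull favors local inference."""
    missing = set(FACTORS) - set(scores)
    if missing:
        raise ValueError(f"unscored factors: {sorted(missing)}")
    return sum(scores[f] for f in FACTORS) / len(FACTORS) >= 3.0
```

A privacy-sensitive, cost-constrained workload scores high on those factors and lands local; a workload needing frontier-model quality with minimal ops effort lands in the cloud.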
- Course 15: AI Security and Governance on AWS
Learn to design and implement comprehensive AI security architectures on AWS using Bedrock guardrails, CloudTrail auditing, and responsible AI practices. You will explore defense-in-depth security architecture across five scopes from consumer apps to self-trained models, following frameworks developed by AWS Security Specialists. The course covers IAM-based authentication patterns for AI service access, role-based authorization for Bedrock endpoints, and complete security architecture integrating identity, network, and application controls. You will implement continuous monitoring and logging for AI workloads using CloudTrail to create audit trails for every Bedrock API invocation, and build CloudTrail visualizations that reveal usage patterns and anomalies. The Bedrock guardrails module covers configurable safety controls including content filters, PII detection, and topic controls with real-time content classification at multiple severity levels. You will configure both input validation and output safety controls, define security boundaries, and test guardrails against adversarial edge cases. The course also covers Amazon Q security with authentication, data protection, and compliance monitoring, and SageMaker Clarify for bias detection, model explainability, and responsible AI governance. By completing this course, you will be able to design secure AI architectures, implement Bedrock guardrails for content safety, and apply responsible AI practices using SageMaker Clarify.
- Course 12: AI Tooling Capstone: Serverless Multi-Model Systems
Build and deploy a production serverless multi-model Artificial Intelligence (AI) system on Amazon Web Services (AWS) that integrates Amazon Bedrock and Ollama for cloud and local Large Language Model (LLM) execution. This capstone course, the final course in the Applied AI Engineering specialization, synthesizes 19 courses of prior learning into a comprehensive engineering project. You will implement Rust-based LLM applications using the Cargo Lambda toolchain for serverless deployment on AWS Lambda, design YAML-driven ("YAML Ain't Markup Language") prompt engineering workflows for structured configuration management, and build multi-model flow orchestration that routes requests to appropriate models based on task requirements. The course begins with multi-model architecture fundamentals covering the evolving AI model ecosystem, model selection criteria for production workloads, and multi-provider integration patterns that enable fallback and cost optimization. You then advance to serverless production deployment, implementing an Amazon Bedrock router for dynamic model selection and deploying Rust serverless functions with Cargo Lambda that offer cold start and memory advantages for AI workloads. The final capstone challenge requires you to integrate multi-model orchestration, YAML prompt configuration, and serverless deployment into a complete production system evaluated against performance, cost, and reliability standards.
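The routing idea at the heart of the capstone — send each request to the cheapest model that can handle it, with a fallback chain when a provider is unavailable — can be sketched in Python. Model names, prices, and task categories below are hypothetical, not the capstone's actual registry:

```python
# hypothetical registry: capability tiers with illustrative per-1K-token prices
MODELS = {
    "fast":    {"name": "bedrock:haiku-class",  "cost_per_1k": 0.00025},
    "general": {"name": "bedrock:sonnet-class", "cost_per_1k": 0.003},
    "local":   {"name": "ollama:llama-class",   "cost_per_1k": 0.0},
}

# assumed set of tasks cheap enough for the small-model tier
CHEAP_TASKS = {"classify", "extract", "route"}

def route(task: str, providers_up=frozenset({"bedrock", "ollama"})) -> str:
    """Pick a model id by task requirements; fall back to local Ollama
    when Bedrock is unreachable (outage- or cost-driven fallback)."""
    if "bedrock" not in providers_up:
        return MODELS["local"]["name"]
    tier = "fast" if task in CHEAP_TASKS else "general"
    return MODELS[tier]["name"]
```

In the capstone's architecture this decision would live behind the Bedrock router inside a Cargo Lambda function, with the tier table loaded from the YAML prompt/configuration files rather than hard-coded.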
- Course 20: AI-Powered Data Pipelines with Deno
Learn to build AI-powered data pipelines using Deno, a modern JavaScript and TypeScript runtime with built-in security and developer tooling. You will explore roadmap-driven development with agentic AI for automated project planning, and implement git pre-commit hooks and quality gates that enforce code standards before commits enter the repository. The course covers the Deno ecosystem including its module system with URL-based imports, standard library, and the distinction between proactive and reactive toolchains demonstrated through Deno and Ruchy comparisons. You will build data engineering workflows using the Deno task system, configuring task automation through deno.json for repeatable data processing pipelines. The task playbooks module demonstrates composing multiple tasks into end-to-end data pipelines and executing them with hands-on demonstrations. The production tooling module covers Deno compile for creating standalone executable binaries that run without a runtime installation, Deno doc for generating API documentation directly from TypeScript types, and Deno vendor for caching remote dependencies locally to ensure reproducible offline builds. By completing this course, you will be able to design, build, and deploy AI-powered data pipelines using Deno's built-in task system, compile to standalone binaries, and vendor dependencies for production reliability.
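Task automation through deno.json, as described above, is plain configuration. A minimal assumed example (task names, permission flags, and file paths are hypothetical) composing an extract-then-transform pipeline:

```json
{
  "tasks": {
    "extract": "deno run --allow-net --allow-write src/extract.ts",
    "transform": "deno run --allow-read --allow-write src/transform.ts",
    "pipeline": "deno task extract && deno task transform"
  }
}
```

Running `deno task pipeline` then executes the composed steps in order, with each step limited to the permissions its flags grant.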
- Course 16: AWS Generative AI and Foundation Models
Learn to build generative AI solutions on AWS by working hands-on with Amazon Bedrock, Retrieval Augmented Generation pipelines, Amazon Q Developer, and open-source LLM toolchains. You will apply tokenization concepts to understand model pricing and context windows, construct RAG pipelines grounded in your own knowledge bases, and use the Bedrock SDK in Rust and Python to invoke foundation models programmatically. The course covers Amazon Q Developer for AI-assisted code generation, security scanning, and documentation workflows across VS Code and IntelliJ. You will compile llama.cpp with parallel build optimizations informed by Amdahl's Law, package models in the GGUF quantization format, and deploy open-source LLMs on AWS EC2 GPU instances. The course also introduces SageMaker Canvas for no-code visual machine learning and the UV Python packaging tool for dependency management. By completing this course, you will be able to evaluate trade-offs between managed AWS services, open-source toolchains, and no-code platforms for production generative AI workloads.
- Course 17: AWS Intelligent Applications with Amazon Bedrock
Learn to build intelligent applications with Amazon Bedrock through hands-on console exploration, API development, and autonomous agent construction. You will navigate the Bedrock model catalog, compare foundation models like Claude and Haiku side by side, and implement the Dracula pattern for cloud-to-local model portability using Ollama as a fallback. The course progresses from console-based prototyping to programmatic API development, where you build Bedrock clients in both Bash with curl and Rust with the AWS SDK. You will create and query knowledge bases backed by S3 data sources and Titan embedding models, using both the console and AWS CloudShell for programmatic management. The final module covers Bedrock agents — autonomous systems that plan and execute multi-step tasks using action groups, Lambda integration, and knowledge-base-backed RAG for grounded responses. By completing this course, you will be able to evaluate trade-offs between managed cloud models and local inference, build production Bedrock APIs, and construct agent workflows that combine reasoning with retrieval.
- Course 11: Build a Production SaaS Application with AI
Learn to build and launch a complete Software as a Service (SaaS) application using AI-assisted development techniques. This course walks through the entire product lifecycle, from planning a Minimum Viable Product (MVP) to deploying a monetized Application Programming Interface (API) service. You will build a Python API using FastAPI, define data models, create documented endpoints, and verify behavior with an automated pytest test harness. The course covers Docker containerization from Dockerfile creation through container testing, automated builds via Continuous Integration (CI) pipelines, and publishing images to a container registry for production distribution. In the second module, you will build the go-to-market foundation: designing conversion-focused landing pages, structuring pricing tiers, deploying a marketing site to GitHub Pages, implementing API key authentication for metered access, and writing developer documentation that drives adoption. Throughout the course, Large Language Model (LLM) tools accelerate development from architecture planning through code generation. By completing this course, you will have the skills to take an AI-powered SaaS product from concept to production launch.
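API key authentication for metered access, mentioned above, reduces to a lookup plus a usage counter. A framework-free Python sketch (keys, tiers, and quotas are made up; a real service would back this with a database and attach it to FastAPI as a dependency):

```python
# hypothetical in-memory key store: key -> (pricing tier, monthly call quota)
API_KEYS = {
    "demo-key-123": ("free", 100),
    "pro-key-456": ("pro", 100_000),
}
usage = {}  # key -> calls consumed this billing period

def authorize(api_key: str) -> str:
    """Validate the key and meter one call; returns the caller's tier."""
    if api_key not in API_KEYS:
        raise PermissionError("invalid API key")
    tier, quota = API_KEYS[api_key]
    used = usage.get(api_key, 0)
    if used >= quota:
        raise PermissionError(f"quota exceeded for tier {tier}")
    usage[api_key] = used + 1
    return tier
```

Wiring this into the pricing tiers from the marketing module is what turns the API into a monetized product: the tier a key maps to is exactly what the landing page sells.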
- Course 19: Conversational Bot Architecture with Rust and Deno
Build multi-platform conversational bots using Rust and Deno by applying architecture patterns that separate core logic from platform-specific bindings. You will design Cargo workspace structures for organizing multi-crate bot projects, implement async event loops with the Tokio runtime for concurrent conversation handling, and apply Rust's ownership and borrowing model to write memory-safe concurrent code without garbage collection. The course walks through a universal bot crate that provides platform-agnostic conversation logic using Rust traits and generics. You will connect this universal bot to Amazon Bedrock for Large Language Model (LLM) powered responses using Claude, build an interactive Command-Line Interface (CLI) for testing bot conversations, and deploy a Discord bot using Deno and TypeScript. Deno's built-in permissions, TypeScript support, and Web Standard APIs simplify bot deployment compared to traditional Node.js approaches. Each module includes hands-on demonstrations of real bot implementations, from basic CLI conversation loops to production Discord integrations. The final project synthesizes workspace architecture, async runtime patterns, and platform bindings into a complete multi-platform bot system.
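Rust's Tokio event loop has a close analogue in Python's asyncio, which is enough to sketch the concurrency pattern above: many conversations advance interleaved on one thread, each awaiting its (here simulated) LLM call. The echo handler is a stand-in for the Amazon Bedrock request the course wires in:

```python
import asyncio

async def llm_reply(message: str) -> str:
    # stand-in for an Amazon Bedrock / Claude call; sleep simulates latency
    await asyncio.sleep(0.01)
    return f"echo: {message}"

async def conversation(messages):
    # one conversation: turns stay strictly ordered within the dialogue
    return [await llm_reply(m) for m in messages]

async def main():
    # many conversations: interleaved concurrently by the event loop,
    # the same shape a Tokio runtime gives the Rust bot core
    return await asyncio.gather(
        conversation(["hi", "bye"]),
        conversation(["hello"]),
    )
```

The key property carries over to the Rust implementation: within a conversation, replies arrive in order; across conversations, the awaits overlap, so one slow LLM call never blocks other users.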
Taught by
Alfredo Deza, Liam Parker and Noah Gift