Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

Coursera

AI Risk and Compliance: Audit and Governance Foundations

Board Infinity via Coursera

Overview

Google, IBM & Meta Certificates — All 10,000+ Courses at 40% Off
One annual plan covers every course and certificate on Coursera. 40% off for a limited time.
Get Full Access
This advanced course provides a practical, end-to-end approach to governing, securing, and auditing AI systems in enterprise environments. Learners begin by examining adversarial threats to AI systems—including jailbreaks, prompt injection, data leakage, manipulation, and misinformation attacks—and practice structured red teaming using both manual and automated techniques. Participants learn how to analyze vulnerability severity and exploitability, prioritize remediation, and evaluate AI system readiness under adversarial conditions while communicating findings through clear, audit-ready documentation. The course then explores regulatory and governance frameworks, focusing on the EU AI Act and the NIST AI Risk Management Framework (Govern, Map, Measure, Manage). Learners analyze AI system classifications, risk tiers, and obligations, and apply NIST AI RMF principles across the AI lifecycle. The course also covers key legal and compliance risks, including copyright, licensing, and data usage concerns in training data and outputs, and guides learners in creating concise compliance documentation and policies aligned with EU AI Act and NIST AI RMF requirements. Learners dive into explainability for LLMs and other AI models, exploring challenges and techniques such as SHAP, LIME, and attention visualization. They apply these tools to generate human-readable explanations, and critically evaluate the faithfulness, reliability, and quality of these explanations for different stakeholders. Finally, the course turns to safety engineering and organizational governance, including implementing guardrails frameworks (e.g., Guardrails AI, NVIDIA NeMo) and using Presidio for PII detection, masking, and anonymization in AI and RAG pipelines. Learners assess Shadow AI risks and design governance strategies, monitoring, and control architectures that mitigate unsafe AI usage, document vulnerabilities, and support continuous regulatory compliance. Disclaimer: This is an independent educational resource created by Board Infinity for informational and educational purposes only. This course is not affiliated with, endorsed by, sponsored by, or officially associated with any company, organization, or certification body unless explicitly stated. The content provided is based on industry knowledge and best practices but does not constitute official training material for any specific employer or certification program. All company names, trademarks, service marks, and logos referenced are the property of their respective owners and are used solely for educational identification and comparison purposes.

Syllabus

  • Adversarial Robustness & Red Teaming AI Systems
    • In this module, learners dive into the adversarial threat landscape for modern AI systems and practice structured red teaming workflows. You will explore real-world AI threat models, including jailbreaks, prompt injection, leakage, and manipulation attacks, and distinguish benign failures from genuinely adversarial behavior. Through videos, readings, AI dialogues, and a hands-on lab using Giskard, you will learn how to execute automated red teaming, interpret vulnerability reports, and prioritize remediation actions. By the end of the module, you will be prepared to evaluate system readiness under adversarial conditions and document findings in an audit- and security-friendly format.
  • Regulatory Compliance: EU AI Act, NIST RMF & Copyright
    • This module focuses on the regulatory and risk-management frameworks that govern enterprise AI systems, with emphasis on the EU AI Act, the NIST AI Risk Management Framework (RMF), and key copyright and data usage issues. Learners will analyze EU AI Act risk tiers, high-risk obligations, conformity assessments, and post-market monitoring requirements. You will then map AI lifecycle activities to the NIST AI RMF functions and apply NIST-aligned risk assessment techniques. The module also examines training-data licensing, ownership of LLM outputs, enterprise liability, and unauthorized training risks. Through a lab and applied exercises, you will classify AI systems under the EU AI Act, map risks to NIST functions, and produce concise compliance documentation.
  • Explainability (XAI) & System Transparency
    • In this module, learners explore explainable AI (XAI) techniques and transparency practices for large language models and other complex systems. You will investigate why explainability is challenging for LLMs and compare leading XAI methods such as SHAP, LIME, and attention maps, including guidance on when to use each. The module then turns to stakeholder-facing communication, showing how to generate human-readable explanations and present them effectively to executives and regulators while maintaining faithfulness and reliability. Finally, you will design transparency workflows that satisfy governance and compliance requirements, including documentation of system and decision flows. A hands-on lab guides you through applying SHAP or LIME to a text classifier and drafting a transparency report suitable for audits.
  • Guardrails, PII Protection & Shadow AI Mitigation
    • This capstone module addresses practical governance controls for safe AI usage, focusing on guardrails frameworks, PII protection, and Shadow AI mitigation. Learners begin by implementing guardrails for safety and policy enforcement using Guardrails AI and NVIDIA NeMo, including rule-based and semantic guardrails and testing them against attacks. The module then introduces Microsoft Presidio for PII detection and anonymization, demonstrating how to detect, mask, and scrub sensitive data and integrate Presidio into RAG pipelines. Finally, you will examine Shadow AI risks in enterprises, monitoring and enforcement techniques, and organization-wide governance controls. A major lab ties these elements together by red teaming a chatbot with Giskard, implementing Guardrails and Presidio, and producing comprehensive evidence and documentation that serve as the practical course capstone.

Taught by

Board Infinity

Reviews

Start your review of AI Risk and Compliance: Audit and Governance Foundations

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.