Explore a systems-theoretic approach to analyzing hazards in military AI systems through this 46-minute conference talk from BSidesLV's PasswordsCon. Learn how AI systems can fail dangerously without any explicit component breakdown, and discover STPA-Sec, a specialized methodology for conducting hazard analysis in AI-enabled environments. Focus on the distinct challenges posed by generative and predictive models in military contexts, where the stakes of system failure are especially high. Examine systemic risks, including misaligned recommendations, inadequate feedback loops, and interface ambiguity, that can compromise autonomous systems. Gain practical strategies for identifying and controlling hidden hazards before they manifest as harmful outcomes, ensuring secure and assured autonomy in critical defense applications. Understand why traditional safety analysis methods fall short when applied to AI systems and why a systems-theoretic perspective is essential for comprehensive risk assessment in modern military technology.