
YouTube

How I Trained an AI Model to Beat the 1990's Arcade Game Double Dragon

DevConf via YouTube

Overview

Explore advanced distributed AI training techniques through a practical demonstration of training an AI model to master the classic 1990s arcade game Double Dragon in this 37-minute conference talk. Learn how to leverage Kubernetes and KubeRay on OpenShift to deploy game simulations across clusters, enabling rapid AI training through distributed computing. Discover the integration of KubeRay to enhance training processes and significantly reduce training time for reinforcement learning models including Deep Q-Network (DQN) and Proximal Policy Optimization (PPO). Witness practical applications of these technologies as AI agents master complex video games, demonstrating OpenShift's power and scalability for AI training. Master the practical steps for setting up and managing distributed training environments, optimizing resource usage, and achieving faster convergence times in AI model training. Understand the broader implications of these techniques beyond gaming, including applications in healthcare and autonomous driving. Gain knowledge to leverage Kubernetes and OpenShift in your own AI projects, fostering innovation and efficiency in large-scale AI operations.
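The talk centers on reinforcement learning algorithms such as DQN and PPO. As background for readers new to the idea, the core value update that DQN scales up with a neural network can be sketched in plain Python using a tiny tabular example (a simplified illustration on a hypothetical 5-state corridor environment, not the talk's actual code or environment):

```python
import random

random.seed(0)

N_STATES, GOAL = 5, 4            # states 0..4; reaching state 4 ends the episode
ACTIONS = (-1, +1)               # move left / move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Q-table: Q[state][action_index], initialized to zero
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Deterministic corridor dynamics: reward 1.0 only on reaching the goal."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(500):             # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a_idx = random.randrange(2)
        else:
            a_idx = max((0, 1), key=lambda i: Q[s][i])
        s2, r, done = step(s, ACTIONS[a_idx])
        # Bellman update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a_idx] += ALPHA * (target - Q[s][a_idx])
        s = s2

# Greedy policy over the non-goal states: 1 means "move right"
greedy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(GOAL)]
print(greedy)
```

DQN replaces the table with a neural network over raw game frames, and distributed setups like the one in the talk parallelize the expensive part, running many game-environment rollouts at once across cluster nodes, while the learner consumes their experience centrally.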

Syllabus

How I trained an AI Model to Beat the 1990's Arcade Game Double Dragon - DevConf.US 2025

Taught by

DevConf

