The Complete Android 16 Course [Part 3] - Become a Master

via Udemy

Overview

Advanced Android Development with Google Maps, Machine Learning, YOLO & TensorFlow. Become a Master.

Who this course is for:
  • Intermediate Android developers who already understand Android fundamentals and want to move into advanced, real-world app development.
  • Android developers interested in machine learning, computer vision, and AI-powered mobile applications.
  • Developers who want to integrate Google Maps and build real-world apps such as Uber-like location-based applications.
  • Machine learning beginners who want to apply ML concepts practically inside Android apps (no heavy math required).
  • Developers who want to create, train, and deploy custom ML models using TensorFlow Lite (TFLite).
  • Android engineers aiming to build object detection apps, including custom YOLO and SSD MobileNet models.
  • Students or professionals preparing for advanced Android, AI, or computer vision projects.
  • Developers looking to upgrade their portfolio with advanced Android + ML projects.

Welcome to Part 3 of the Android App Development Series, where we move into advanced Android engineering and on-device machine learning.

This course is built for developers who want to go beyond traditional CRUD-based apps and start developing intelligent, production-level Android applications that combine mapping systems, real-time data, and machine learning models.

You will begin by mastering advanced Google Maps integration, learning how to build Uber-style applications that handle live location tracking, camera movement, markers, polyline routing, distance calculations, and map-based UI optimization for real-world use cases.
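To give a flavor of those Maps topics, here is a minimal sketch, assuming the Google Maps SDK for Android is already configured and a `GoogleMap` instance is available; the coordinates, titles, and the straight-line route are illustrative placeholders, not values from the course:

```kotlin
import com.google.android.gms.maps.CameraUpdateFactory
import com.google.android.gms.maps.GoogleMap
import com.google.android.gms.maps.model.LatLng
import com.google.android.gms.maps.model.MarkerOptions
import com.google.android.gms.maps.model.PolylineOptions

// Hypothetical pickup and drop-off points for an Uber-style trip.
fun showRoute(map: GoogleMap) {
    val pickup = LatLng(33.8938, 35.5018)   // example coordinates
    val dropOff = LatLng(33.8886, 35.4955)

    // Drop a marker at each end of the trip.
    map.addMarker(MarkerOptions().position(pickup).title("Pickup"))
    map.addMarker(MarkerOptions().position(dropOff).title("Drop-off"))

    // Draw a straight polyline between them; a production app would
    // instead plot route points returned by a directions service.
    map.addPolyline(PolylineOptions().add(pickup, dropOff).width(8f))

    // Animate the camera to frame the pickup location.
    map.animateCamera(CameraUpdateFactory.newLatLngZoom(pickup, 15f))
}
```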

Next, you will dive deep into Machine Learning on Android, focusing on end-to-end workflows rather than isolated concepts. You will learn how to:

  • Prepare and structure datasets for mobile ML

  • Train custom models for Android use cases

  • Convert and optimize models into TensorFlow Lite (TFLite)

  • Deploy and run ML models efficiently on Android devices
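The last step above — running a converted TFLite model on-device — can be sketched with the `org.tensorflow.lite` Interpreter API roughly as follows; the model file name, the 224×224×3 input shape, and the 10-class output are assumptions for illustration, not values the course prescribes:

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.support.common.FileUtil

// Load a bundled .tflite model from assets and run one inference.
// Input is expected as [1, 224, 224, 3] floats in this sketch.
fun classify(context: Context, pixels: Array<Array<Array<FloatArray>>>): FloatArray {
    val modelBuffer = FileUtil.loadMappedFile(context, "model.tflite")
    val interpreter = Interpreter(modelBuffer)

    val output = Array(1) { FloatArray(10) }   // [1, numClasses]
    interpreter.run(pixels, output)

    interpreter.close()
    return output[0]
}
```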

A major focus of this course is computer vision and object detection. You will work with industry-standard architectures such as SSD MobileNet and YOLO, learning:

  • Differences between detection models and when to use each

  • How to train custom object detection models from scratch

  • How to export and integrate these models into Android apps

  • How to perform real-time object detection using the device camera
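For models exported with TFLite metadata (SSD MobileNet exports typically are), the Task Library's `ObjectDetector` handles much of this plumbing. A hedged sketch, with the file name and thresholds as placeholder choices:

```kotlin
import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.detector.Detection
import org.tensorflow.lite.task.vision.detector.ObjectDetector

// Run a packaged detection model on one camera frame.
// "detector.tflite" is a placeholder asset name.
fun detectObjects(context: Context, frame: Bitmap): List<Detection> {
    val options = ObjectDetector.ObjectDetectorOptions.builder()
        .setMaxResults(5)          // keep only the top detections
        .setScoreThreshold(0.5f)   // drop low-confidence boxes
        .build()
    val detector = ObjectDetector.createFromFileAndOptions(
        context, "detector.tflite", options
    )
    // Each Detection carries a bounding box plus labeled scores.
    return detector.detect(TensorImage.fromBitmap(frame))
}
```

Custom YOLO exports without metadata usually require parsing the raw output tensors yourself instead of using this helper.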

You will also learn optimization techniques critical for mobile performance, including model size reduction, inference speed optimization, and resource management, ensuring your apps run smoothly on real devices.
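One common inference-speed lever is configuring the TFLite interpreter itself. A small sketch, assuming a model already loaded into memory; the thread count is an illustrative choice, not a recommendation from the course:

```kotlin
import java.nio.MappedByteBuffer
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate

// Speed up on-device inference with multiple CPU threads and, where
// the hardware supports it, the NNAPI accelerator delegate.
fun fastInterpreter(model: MappedByteBuffer): Interpreter {
    val options = Interpreter.Options()
        .setNumThreads(4)              // parallel CPU inference
        .addDelegate(NnApiDelegate())  // offload to NPU/GPU via NNAPI
    return Interpreter(model, options)
}
```

Model-size reduction, by contrast, mostly happens at conversion time (for example via quantization), before the model ever reaches the device.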

This course is project-driven and implementation-focused. Every major concept is applied directly to Android, giving you a clear understanding of how machine learning, computer vision, and Android development work together in real products.

By the end of this course, you will have:

  • Built advanced, map-based Android applications

  • Implemented AI-powered features using on-device ML

  • Created and deployed custom TFLite object detection models

  • Developed real-time ML-powered Android apps ready for production

  • Significantly upgraded your Android and AI skill set

This is an advanced-level course and assumes prior knowledge of Kotlin, Android Studio, and Android fundamentals.

Taught by

Abbass Masri - Doc. Ali Alaeddine

Reviews

4.9 rating at Udemy based on 4 ratings
