
Transformers and NLP: Fine-Tuning Models with Hugging Face

Board Infinity via Coursera

Overview

Transformers, Fine-Tuning, and Model Evaluation is designed for learners with deep learning and NLP experience who want to master transformer architectures, fine-tune pre-trained models using Hugging Face, and deploy production-ready NLP solutions.

You'll begin by exploring the transformer architecture in depth, including self-attention mechanisms, positional encodings, and model families such as BERT, GPT, and T5. Next, you'll learn to prepare datasets, fine-tune models for classification tasks, and evaluate results using metrics such as F1, precision, and confusion matrices. The third module covers reproducibility and version control using DVC and Git, along with publishing models to the Hugging Face Hub. Finally, you'll build and deploy transformer inference APIs using FastAPI, optimize performance through quantization, and integrate CI/CD practices for production systems.

By the end of this course, you will be able to:

  • Apply transformer architectures to solve real-world NLP tasks
  • Fine-tune and evaluate pre-trained models using Hugging Face Transformers and Datasets
  • Build reproducible ML pipelines with DVC and Git version control
  • Deploy and test transformer-based inference APIs using FastAPI

Disclaimer: This is an independent educational resource created by Board Infinity for informational and educational purposes only. This course is not affiliated with, endorsed by, sponsored by, or officially associated with any company, organization, or certification body unless explicitly stated. The content provided is based on industry knowledge and best practices but does not constitute official training material for any specific employer or certification program. All company names, trademarks, service marks, and logos referenced are the property of their respective owners and are used solely for educational identification and comparison purposes.
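To give a flavor of the evaluation metrics covered in module two (precision, F1, and the confusion matrix), here is a minimal hand-rolled sketch in plain Python. This is an illustration of what the numbers mean, not the course's own code; in practice you would use Hugging Face's `evaluate` library or scikit-learn.

```python
# Binary-classification metrics from scratch, for intuition only.
# Labels are 0/1; y_true are gold labels, y_pred are model predictions.

def confusion_matrix(y_true, y_pred):
    """Return [[TN, FP], [FN, TP]] for binary labels 0/1."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return [[tn, fp], [fn, tp]]

def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 from the confusion matrix."""
    (_, fp), (fn, tp) = confusion_matrix(y_true, y_pred)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy predictions: 3 true positives, 1 false positive, 1 false negative.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
p, r, f1 = precision_recall_f1(y_true, y_pred)  # 0.75, 0.75, 0.75
```

The equivalent one-liners in scikit-learn are `confusion_matrix` and `precision_recall_fscore_support`; the course pairs these metrics with Hugging Face's `Trainer` during fine-tuning.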

Syllabus

  • Transformer Architecture and Foundations
    • Covers the key components and mechanisms of transformer architecture including self-attention, embeddings, and model families for modern NLP tasks. Duration: 4 hours.
  • Fine-Tuning Pre-Trained Models
    • Focuses on loading and preparing datasets using Hugging Face Datasets, fine-tuning transformer models for classification tasks, and evaluating performance using standardized metrics and confusion matrices. Duration: 4 hours.
  • Evaluation, Reproducibility, and Version Control
    • Covers implementing reproducible ML pipelines using DVC, tracking models and metrics with Git and DVC, and sharing results on the Hugging Face Hub. Duration: 4 hours.
  • Deployment and Integration of Transformer Models
    • Covers deploying fine-tuned transformers as APIs with FastAPI, evaluating deployed endpoints for accuracy and latency, and integrating monitoring and testing for production readiness. Duration: 4 hours.
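The self-attention mechanism at the heart of the first module reduces to scaled dot-product attention. The sketch below is a pure-Python, single-head version with no learned projection matrices, intended only to illustrate the computation; real transformer implementations use batched tensor operations in a framework like PyTorch.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: lists of equal-length vectors, one per token.

    For each query, score it against every key, scale by sqrt(d_k),
    softmax the scores into weights, and return the weighted sum of values.
    """
    d_k = len(K[0])
    outputs = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, V))
                        for j in range(len(V[0]))])
    return outputs

# Two tokens with one-hot embeddings: each token attends mostly to itself.
Q = K = V = [[1.0, 0.0], [0.0, 1.0]]
out = scaled_dot_product_attention(Q, K, V)
```

Because the attention weights for each query sum to one, every output vector here is a convex combination of the value vectors, with the diagonal (self-match) getting the larger weight.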

Taught by

Board Infinity

