
Multi-Domain Large Language Model Adaptation Using Synthetic Data Generation

Weights & Biases via YouTube

Overview

Learn how Shell researchers tackle the challenge of preserving institutional knowledge by adapting large language models for domain-specific applications in this 18-minute conference talk from Fully Connected London. NLP Researcher Injy Sarhan and Senior Researcher Avanindra Singh detail Shell's approach to fine-tuning off-the-shelf LLMs that lack understanding of domain-specific language, culminating in a research assistant designed to make research more efficient. Explore their domain ingestion pipeline built with NVIDIA NeMo Curator and W&B Weave, covering data preprocessing, domain adaptation, instruction tuning, and evaluation methodologies. Understand how domain-adapted LLMs achieve stronger domain-specific reasoning and improved factual accuracy, and see how W&B Weave's LLM-as-judge functionality and feedback loops aligned manual and auto-generated benchmarks to ensure model performance meets organizational standards.
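The LLM-as-judge evaluation described above — scoring model answers against a benchmark and checking them against a quality bar — can be sketched in plain Python. This is an illustrative stand-in, not Shell's implementation or the W&B Weave API: the `judge` function here uses simple token overlap so the sketch runs without an LLM backend, whereas a real judge would prompt an LLM to grade each answer against its reference, with calls traced and logged (e.g. via Weave).

```python
# Minimal sketch of the LLM-as-judge evaluation loop.
# All function and variable names here are illustrative assumptions,
# not part of any real API.

def judge(question: str, answer: str, reference: str) -> float:
    """Hypothetical judge returning a score in [0, 1].

    Stubbed with token overlap so the example is self-contained;
    a real judge would ask an LLM to grade answer vs. reference.
    """
    ref_tokens = set(reference.lower().split())
    ans_tokens = set(answer.lower().split())
    if not ref_tokens:
        return 0.0
    return len(ref_tokens & ans_tokens) / len(ref_tokens)

def evaluate(benchmark, score_fn, threshold: float = 0.5) -> float:
    """Score each (question, answer, reference) triple and return
    the fraction of items meeting the threshold (the pass rate)."""
    scores = [score_fn(q, a, ref) for q, a, ref in benchmark]
    passed = sum(s >= threshold for s in scores)
    return passed / len(scores)

# Tiny illustrative benchmark of (question, model answer, reference).
benchmark = [
    ("What does the pipeline ingest?", "domain documents", "domain documents"),
    ("Which tool curates the data?", "NeMo Curator", "NVIDIA NeMo Curator"),
]
print(evaluate(benchmark, judge))  # pass rate across the benchmark
```

The feedback loop mentioned in the talk would then compare pass rates on the manually written and auto-generated benchmarks, adjusting the judge prompt or the synthetic benchmark until the two agree.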

Syllabus

Multi-domain large language model adaptation using synthetic data generation - Shell @ FC London '25

Taught by

Weights & Biases

