The Preparing Text for AI Models course is designed for developers, engineers, and technical product builders who are new to generative AI but already have intermediate machine learning knowledge, basic Python proficiency, and familiarity with development environments such as VS Code. It is aimed at those who want to engineer, customize, and deploy open generative AI solutions while avoiding vendor lock-in.
The course equips learners with practical skills in dataset sourcing, preprocessing, and formatting for training large language models. Starting with the discovery of text datasets from repositories like Hugging Face, Kaggle, and Common Crawl, learners evaluate quality, relevance, and licensing considerations.
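The quality and licensing checks described above can be sketched as a simple filter over candidate documents. This is a minimal, illustrative example: the record fields (`text`, `license`), the `PERMISSIVE_LICENSES` set, and the thresholds are assumptions for demonstration, not the API of Hugging Face, Kaggle, or Common Crawl.

```python
# Hypothetical sketch of a dataset quality/license filter.
# Field names and license identifiers are illustrative assumptions.

PERMISSIVE_LICENSES = {"cc0-1.0", "cc-by-4.0", "mit", "apache-2.0"}

def is_usable(record, min_chars=200, max_non_text_ratio=0.3):
    """Return True if a candidate document passes basic quality and license checks."""
    text = record.get("text", "")
    license_id = record.get("license", "").lower()
    if license_id not in PERMISSIVE_LICENSES:
        return False          # reject unclear or restrictive licensing
    if len(text) < min_chars:
        return False          # too short to be a useful training document
    # Rough proxy for markup/boilerplate: share of non-alphanumeric, non-space chars.
    non_text = sum(1 for c in text if not (c.isalnum() or c.isspace()))
    if non_text / max(len(text), 1) > max_non_text_ratio:
        return False
    return True

candidates = [
    {"text": "A" * 500, "license": "mit"},          # passes
    {"text": "short", "license": "mit"},            # fails: too short
    {"text": "B" * 500, "license": "proprietary"},  # fails: license
]
usable = [r for r in candidates if is_usable(r)]
```

In practice these heuristics would be tuned per source, and license metadata would be verified against the repository's actual terms rather than a hard-coded allowlist.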
The course then covers preprocessing pipelines, including text cleaning, normalization, deduplication, and tokenization strategies, ensuring efficiency and compatibility with model training. Learners also design annotation schemas, apply semi-automated labeling techniques, and build validation workflows to maintain quality. The final module guides learners in constructing structured datasets for instruction tuning, fine-tuning, and benchmarking, supported by best practices in train-test splits and stratification. By the end of the course, learners will have created production-ready text datasets suitable for generative AI applications.
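The pipeline stages above (cleaning, normalization, deduplication, tokenization) and the stratified split can be sketched end to end in plain Python. This is a simplified illustration under stated assumptions: exact-match hashing stands in for near-duplicate detection (e.g. MinHash), whitespace tokenization stands in for a trained subword tokenizer, and the function names are hypothetical.

```python
import hashlib
import random
from collections import defaultdict

def clean(text):
    # Minimal cleaning pass: collapse runs of whitespace.
    return " ".join(text.split())

def normalize(text):
    # Case-fold for comparison; real pipelines often add Unicode NFC/NFKC.
    return text.casefold()

def deduplicate(docs):
    # Exact-match dedup via content hashes of the normalized text.
    seen, out = set(), []
    for d in docs:
        h = hashlib.sha256(normalize(clean(d)).encode()).hexdigest()
        if h not in seen:
            seen.add(h)
            out.append(clean(d))
    return out

def tokenize(text):
    # Whitespace tokenization as a stand-in for BPE or another subword scheme.
    return text.split()

def stratified_split(examples, key, test_frac=0.2, seed=0):
    # Group by label so the test split preserves label proportions.
    rng = random.Random(seed)
    groups = defaultdict(list)
    for ex in examples:
        groups[ex[key]].append(ex)
    train, test = [], []
    for grp in groups.values():
        rng.shuffle(grp)
        cut = max(1, int(len(grp) * test_frac))
        test.extend(grp[:cut])
        train.extend(grp[cut:])
    return train, test

docs = ["Hello   world", "hello world", "Goodbye world"]
unique = deduplicate(docs)            # the case/whitespace variant collapses
tokens = [tokenize(d) for d in unique]
```

Splitting by label before shuffling, with a fixed seed, keeps the split reproducible and prevents rare classes from vanishing from the test set, which is the point of stratification in benchmark construction.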