Explore autonomous AI agents AutoGPT and BabyAGI, understanding their capabilities, risks, and integration with GPT-4 and LangChain for automated resource management and task execution.
Dive into LangChain's capabilities for integrating external documents and SQL with GPT-4, learning to chain multiple data sources and leverage advanced language models effectively.
Dive into the comprehensive world of Large Language Models, from GPT to LLaMA, covering model architectures, fine-tuning techniques, and practical implementation strategies for AI applications.
Dive into the technical implementation of self-instruct fine-tuning for Large Language Models using Alpaca, covering synthetic dataset generation and parallel task optimization.
Master the creation of synthetic instruction datasets using ChatGPT and GPT-4 for fine-tuning language models, covering data generation, multi-task training, and task decomposition techniques.
Master self-instruct fine-tuning techniques for LLMs, focusing on synthetic data generation using ChatGPT/GPT-4 to enhance model performance in specific tasks like summarization and translation.
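As a rough illustration of the Alpaca-style instruction records these self-instruct courses discuss, the sketch below builds a tiny synthetic dataset by hand. The `instruction`/`input`/`output` schema follows the published Alpaca format, but the helper name and sample tasks are invented for illustration; a real pipeline would prompt ChatGPT/GPT-4 to expand seed tasks instead of hard-coding them.

```python
import json

# Hypothetical helper: wrap one task in the Alpaca-style record schema.
def make_record(instruction, inp, output):
    return {"instruction": instruction, "input": inp, "output": output}

# Hand-written stand-ins for seed tasks (a self-instruct pipeline would
# generate thousands of these with an LLM).
seed_tasks = [
    make_record("Summarize the text.",
                "LoRA adapts large models cheaply.",
                "LoRA is a cheap adaptation method."),
    make_record("Translate to French.", "Good morning.", "Bonjour."),
]

# Serialize as JSON lines, a common on-disk format for instruction datasets.
dataset = "\n".join(json.dumps(r) for r in seed_tasks)
print(dataset.splitlines()[0])
```

Fine-tuning then treats each record as a prompt (`instruction` plus `input`) paired with a target completion (`output`).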
Master efficient fine-tuning of large language models using PEFT and LoRA techniques, optimizing GPU memory usage through INT8 quantization and adapter-tuning for cost-effective model development.
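The INT8 idea mentioned above can be sketched in a few lines: store weights as 8-bit integers plus one float scale, and dequantize on the fly. This is symmetric per-tensor quantization for illustration only; production libraries such as bitsandbytes use more elaborate schemes.

```python
import numpy as np

# Symmetric per-tensor INT8 quantization (illustrative sketch, not a
# library implementation): map floats into [-127, 127] via one scale.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Storage drops 4x (int8 vs float32); rounding error stays below one step.
print(q.dtype, np.abs(w - w_hat).max() < s)
```

The memory saving is what lets a model that would not fit in GPU RAM at float32 precision load in 8 bits, at the cost of small rounding error per weight.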
Master PEFT LoRA techniques to efficiently fine-tune large language models on local GPUs through low-rank adaptation, reducing memory requirements while maintaining model performance.
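The low-rank adaptation idea behind both LoRA entries can be shown with plain arrays: freeze the pretrained weight W and train only two small matrices A and B, so the effective weight is W + (alpha / r) * B @ A. The dimensions below are chosen for illustration; the parameter counts show the saving.

```python
import numpy as np

# LoRA sketch: a d x d weight is adapted by an r-rank update (r << d).
d, r, alpha = 1024, 8, 16
W = np.zeros((d, d))              # stand-in for a frozen pretrained weight
A = np.random.randn(r, d) * 0.01  # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection; zero init means
                                  # the adapted model starts identical to W

W_eff = W + (alpha / r) * (B @ A)

full_params = d * d               # what full fine-tuning would update
lora_params = A.size + B.size     # what LoRA actually trains
print(full_params, lora_params, full_params // lora_params)  # 64x fewer
```

Only A and B need gradients and optimizer state, which is why LoRA fits on a single local GPU where full fine-tuning would not.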
Discover the fundamentals of fine-tuning T5 and Flan-T5 language models, exploring key differences, implementation steps, and practical applications using Hugging Face transformers.
Master building a multimodal AI system that combines Vision Transformers and language models to analyze images and generate contextual responses through practical implementation of BLIP-2 architecture.
Explore a comparative analysis of 12 AI language models in clinical settings, examining their effectiveness in processing health records and their performance against specialized medical systems.
Master fine-tuning Vision Transformers (ViT) in PyTorch through hands-on coding, focusing on object identification implementation with practical examples and real-time coding demonstrations in Google Colab.
Master Vision Transformer implementation in PyTorch using pre-trained and fine-tuned ViT models for image classification, with hands-on practice in Google Colab and Hugging Face integration.
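Before any of the ViT fine-tuning these courses cover, the model's first step is always the same: cut the image into fixed-size patches and flatten each into a token. A minimal numpy sketch of that patching step, using the standard ViT defaults (224x224 input, 16x16 patches):

```python
import numpy as np

# Split an HxWxC image into non-overlapping p x p patches and flatten
# each into a vector. For 224x224x3 with p=16 this gives 196 tokens of
# dimension 16*16*3 = 768, which a learned projection then embeds.
def patchify(img, p=16):
    h, w, c = img.shape
    patches = img.reshape(h // p, p, w // p, p, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * c)
    return patches

img = np.zeros((224, 224, 3), dtype=np.float32)
tokens = patchify(img)
print(tokens.shape)  # (196, 768)
```

Everything after this step is a standard transformer over the 196 patch tokens (plus a class token), which is why ViT fine-tuning looks so similar to fine-tuning a language model.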
Explore cutting-edge Vision Transformer technology for medical image classification, focusing on mammogram analysis and early breast cancer detection through advanced deep learning approaches.
Master advanced in-context learning techniques like Chain-of-Thought and ReAct to optimize ChatGPT's performance without expensive fine-tuning, based on the latest research insights and practical applications.
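Chain-of-Thought prompting needs no training at all; it is just prompt assembly. The sketch below shows the idea with an invented worked example: prepend a question whose answer spells out intermediate reasoning, then append the new question. No API call is made; only the prompt construction is shown.

```python
# One few-shot exemplar whose answer shows its reasoning steps
# (the example content is made up for illustration).
FEW_SHOT = (
    "Q: A pen costs $2 and a pad costs $3. What do 2 pens and 1 pad cost?\n"
    "A: 2 pens cost 2 * $2 = $4. One pad costs $3. Total: $4 + $3 = $7.\n"
)

def cot_prompt(question):
    # The trailing cue nudges the model to emit step-by-step reasoning.
    return FEW_SHOT + f"Q: {question}\nA: Let's think step by step."

prompt = cot_prompt("What do 3 pads cost?")
print(prompt.endswith("Let's think step by step."))  # True
```

ReAct extends the same pattern by interleaving reasoning steps with tool-use actions and their observations, but the mechanism is still in-context: structure in the prompt, not weight updates.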