A Framework to Assess Clinical Safety and Hallucination Rates of LLMs for Medical Text Summarization

Stanford University via YouTube

Overview

Explore a comprehensive framework for evaluating clinical safety and hallucination rates in large language models used for medical text summarization in this Stanford University MedAI Group Exchange Session. Learn about the critical challenges facing ambient voice technology (AVT) deployment in healthcare systems across the US, UK, and EU, where accuracy measurement and oversight mechanisms remain scientifically undefined despite widespread adoption. Discover how TORTUS AI developed a systematic approach to address these gaps, particularly in light of new MHRA regulations requiring post-market surveillance and real-world monitoring for AVT systems. Examine the methodology for transitioning from clinician-labeled data to automated monitoring systems, and understand the regulatory landscape surrounding AI-powered medical documentation tools.

Gain insights from Dr. Dom Pimenta, CEO and co-founder of TORTUS AI, who brings a unique perspective as both an internal medicine physician/cardiologist and the NHS's first 'AI Attending', drawing on his experience conducting Europe's largest clinical trial of ambient voice technology.

Syllabus

MedAI #147: A framework to assess clinical safety and hallucination rates of LLMs | Dom Pimenta

Taught by

Stanford MedAI

