Overview
Learn how to practically evaluate large language models for code generation through this 33-minute conference talk from the Linux Foundation's Open Source Summit. Explore the real-world application of open-weight large language models in software development environments and discover how to assess their performance in live coding scenarios. Analyze failed cases to gain valuable insights into model limitations and understand current AI4SE (AI for Software Engineering) benchmarking methodologies. Examine the existing challenges facing AI for software engineering and gain practical knowledge for implementing and evaluating LLMs in your own development workflows.
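Code-LLM benchmarks of the kind referenced above commonly report the pass@k metric: the probability that at least one of k sampled generations passes the unit tests. The talk's exact methodology is not detailed here, but a minimal sketch of the standard unbiased estimator is:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n generated samples for a
    problem, of which c pass the tests, return the probability that
    a random draw of k samples contains at least one correct one."""
    if n - c < k:
        # Fewer incorrect samples than k: a correct one is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 generations, 3 correct, single-sample evaluation.
print(pass_at_k(10, 3, 1))  # 0.3 — equals the raw success rate when k = 1
```

Averaging this value over all benchmark problems yields the headline pass@k score; failed cases (c = 0) are exactly the ones worth inspecting for model limitations.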
Syllabus
Practical Evaluation of LLMs for Code - Maliheh Izadi
Taught by
Linux Foundation