The course equips developers with techniques to enhance the reliability of LLMs, focusing on evaluation, prompt engineering, and fine-tuning. Learn to systematically improve model accuracy through hands-on projects, including building a text-to-SQL agent and applying advanced fine-tuning methods.

ksm26/Improving-Accuracy-of-LLM-Applications


Welcome to the "Improving Accuracy of LLM Applications" course! 🚀 The course provides a systematic approach to enhance the accuracy and reliability of your LLM applications.

📘 Course Summary

Many developers struggle with inconsistent results in LLM applications. 😓 This course is designed to address these challenges by offering hands-on experience in improving accuracy through evaluation, prompt engineering, self-reflection, and fine-tuning techniques.

What You’ll Do:

  1. 🧠 SQL Agent Development: Build a text-to-SQL agent and simulate situations where it hallucinates to begin the evaluation process.
  2. 📊 Evaluation Framework: Create a robust framework to systematically measure performance, including criteria for good evaluations, best practices, and developing an evaluation score.
  3. 🎯 Instruction Fine-tuning: Learn how instruction fine-tuning helps LLMs follow instructions more accurately and how memory fine-tuning embeds facts to reduce hallucinations.
  4. 🚀 Parameter-Efficient Fine-tuning (PEFT): Discover advanced techniques like Low-Rank Adaptation (LoRA) and Mixture of Memory Experts (MoME) that reduce training time while improving model performance.
  5. 🔄 Iterative Fine-tuning: Iterate through generating training data, fine-tuning, and applying practical tips to increase model accuracy.
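Steps 1 and 2 above can be sketched together in a minimal, hypothetical evaluation score for a text-to-SQL agent. A robust score compares query *result sets* rather than raw SQL strings, since two different SQL strings can be semantically equivalent; the schema, table, and data below are illustrative, not the course's actual ones:

```python
import sqlite3

def sql_eval_score(generated_sql, reference_sql, schema_sql, rows):
    """Return True if the generated query yields the same result set as the
    reference query; different SQL strings can be semantically equal."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(schema_sql)
    conn.executemany("INSERT INTO players VALUES (?, ?, ?)", rows)
    try:
        got = conn.execute(generated_sql).fetchall()
    except sqlite3.Error:
        return False  # invalid SQL is scored as a miss (a common hallucination mode)
    want = conn.execute(reference_sql).fetchall()
    return sorted(got) == sorted(want)

# Toy schema and data (hypothetical):
SCHEMA = "CREATE TABLE players (name TEXT, team TEXT, points INTEGER);"
ROWS = [("Alice", "A", 31), ("Bob", "B", 24)]

sql_eval_score("SELECT name FROM players WHERE points > 30",
               "SELECT name FROM players WHERE points >= 31",
               SCHEMA, ROWS)  # → True: different SQL, same result set
```

Averaging this pass/fail score over a test set of (question, reference SQL) pairs gives a single accuracy number to track across fine-tuning iterations.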

🔑 Key Points

  • 🛠️ Systematic Improvement: Work through the development steps of evaluation, prompting, self-reflection, and fine-tuning to improve your model’s reliability and accuracy.
  • 🧠 Memory Tuning: Enhance your model's performance by embedding facts to reduce hallucinations.
  • 🐑 Llama Models: Use the Llama 3 8B model to build an LLM application that converts text to SQL with a custom schema.
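To see why LoRA cuts training cost, compare trainable parameter counts: instead of updating a full d_in × d_out weight matrix, LoRA trains two low-rank factors of shapes d_in × r and r × d_out. A quick back-of-the-envelope calculation (the layer size and rank below are chosen for illustration, not taken from the course):

```python
def lora_param_counts(d_in: int, d_out: int, rank: int):
    """Trainable parameters: full matrix update vs. a rank-r LoRA update."""
    full = d_in * d_out            # every weight is trainable
    lora = rank * (d_in + d_out)   # only the two low-rank factors are
    return full, lora

# A 4096x4096 projection (roughly Llama 3 8B's hidden size) at rank 8:
full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora, f"{100 * lora / full:.2f}%")  # → 16777216 65536 0.39%
```

With under half a percent of the weights trainable per adapted layer, both optimizer state and gradient memory shrink accordingly, which is what makes fine-tuning feasible on modest hardware.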

👩‍🏫 About the Instructors

  • 👩‍💼 Sharon Zhou: Co-Founder and CEO of Lamini, Sharon brings her expertise in LLM development and fine-tuning.
  • 👨‍💼 Amit Sangani: Senior Director of Partner Engineering at Meta, Amit shares valuable insights on engineering reliable LLM applications.

🔗 To enroll in the course or for further information, visit 📚 deeplearning.ai.
