Is Your AI Giving Wrong Answers? Test with RAGAS!
Author: The AI Explorer
Uploaded: 2025-09-18
Views: 81
Want to know if your Retrieval-Augmented Generation (RAG) system is actually working well? In this video, I break down RAGAS (RAG Assessment), a framework that helps you evaluate and improve your RAG pipelines.
You’ll learn:
Why measuring RAG quality matters before going live
Key metrics: Faithfulness, Answer Relevancy, Context Precision, Context Recall
How to calculate precision@k and recall, with examples (see the short worked sketch after this list)
How to build a test set and set a quality bar for your AI system
A live Python demo showing RAGAS in action
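For reference, here is a minimal sketch of the standard precision@k and recall definitions the video walks through. The document IDs and counts below are made up for illustration and are not the exact numbers used in the video's example.

```python
# Minimal sketch of precision@k and recall for retrieval evaluation.
# Document IDs below are illustrative, not the video's worked example.

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = retrieved[:k]
    hits = sum(1 for doc in top_k if doc in relevant)
    return hits / k

def recall(retrieved, relevant):
    """Fraction of all relevant documents that were retrieved at all."""
    if not relevant:
        return 0.0
    hits = sum(1 for doc in retrieved if doc in relevant)
    return hits / len(relevant)

retrieved = ["doc3", "doc7", "doc1", "doc9", "doc4"]  # ranked retriever output
relevant = {"doc1", "doc3", "doc5"}                   # ground-truth relevant docs

print(precision_at_k(retrieved, relevant, k=3))  # 2/3 ≈ 0.67 (doc3 and doc1 are relevant)
print(recall(retrieved, relevant))               # 2/3 ≈ 0.67 (doc5 was never retrieved)
```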
By the end, you’ll know how to continuously monitor and improve your RAG system, much like a software CI/CD pipeline.
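As a preview of the demo, here is a minimal sketch of a RAGAS evaluation in Python, loosely following the ragas quickstart. The single parental-leave question, the 16-week figure, the 0.7 quality bar, and the exact column names and result API are illustrative assumptions that may differ from the video and across ragas versions; the LLM-based metrics also assume a configured model backend (e.g. an OpenAI API key).

```python
# Minimal sketch of a RAGAS evaluation, loosely following the ragas quickstart.
# Column names, metric imports, and the result API vary across ragas versions;
# an LLM backend (e.g. OPENAI_API_KEY in the environment) is assumed.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_precision,
    context_recall,
)

# A tiny hand-built test set (one hypothetical example; a real one needs many more).
data = {
    "question": ["How many weeks of parental leave do employees get?"],
    "answer": ["Employees get 16 weeks of paid parental leave."],
    "contexts": [[
        "The company grants 16 weeks of paid parental leave to all full-time employees."
    ]],
    "ground_truth": ["16 weeks of paid parental leave."],
}

result = evaluate(
    Dataset.from_dict(data),
    metrics=[faithfulness, answer_relevancy, context_precision, context_recall],
)
print(result)

# Quality bar: fail the run (e.g. in CI) if any metric drops below a chosen threshold.
scores = result.to_pandas()[["faithfulness", "answer_relevancy",
                             "context_precision", "context_recall"]].mean()
assert (scores >= 0.7).all(), f"RAG quality below the bar:\n{scores}"
```

Gating on a minimum score like this is what lets the evaluation run as part of a CI pipeline rather than as a one-off check.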
✨ Try the code and resources in the description to practice on your own dataset: https://github.com/trung-tlt/ragas-demo 
Timestamps:
00:00 Intro & recap of RAG
00:34 Why measure RAG quality
01:03 RAG assessment overview
02:28 Four key metrics explained
05:42 Answer vs. search evaluation
08:11 Example: parental leave policy
10:40 Building test sets
12:50 How to improve low scores
15:39 Recap & quality bar setup
16:29 Demo with Python code
20:36 Results & improvement
21:15 Wrap-up & call to action                
 