Key 6 - Open-Source vs Closed AI: Privacy & Data Protection Explained
Author: Duke Center for Computational Thinking
Uploaded: 2025-10-31
Views: 73
Where does your data go when you use AI? Understand the critical distinction between open-source models (Llama, Mistral) running on Duke servers and commercial models (GPT, Claude) running on external infrastructure. Learn about data retention policies, ChatGPT memory features, and how Duke’s educational licenses protect student and faculty data.
Key concepts covered:
Open-source models you can download vs. commercial API-based models (a brief sketch contrasting the two appears after this list)
Duke server installations vs. external cloud providers
OpenAI’s data training policies and the “improve model” toggle
ChatGPT Edu’s automatic training opt-out for educational institutions
Memory features vs. data retention (they’re different!)
Choosing models based on data sensitivity (PHI, student records, confidential research)
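To make the first distinction concrete, here is a minimal sketch (not from the video) of the two data paths: an open-weights model loaded onto hardware you control, where the prompt never leaves that machine, versus a commercial API call, where the prompt is transmitted to the vendor's infrastructure and is subject to its retention policy. The model names, sample prompt, and library choices (Hugging Face transformers, the openai client) are illustrative assumptions, not Duke's specific setup.

# --- Open-weights model downloaded and run locally: the prompt stays on your hardware ---
from transformers import pipeline

local_llm = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative open-weights model; runs on a local or Duke-managed server
)
print(local_llm("Summarize this draft IRB protocol: ...", max_new_tokens=100)[0]["generated_text"])

# --- Commercial API model: the same prompt is sent to the vendor's servers ---
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY; each request travels to OpenAI-hosted infrastructure
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative commercial model name
    messages=[{"role": "user", "content": "Summarize this draft IRB protocol: ..."}],
)
print(resp.choices[0].message.content)

In the first case, data-protection questions reduce to who administers the server; in the second, they depend on the vendor's retention and training policies, such as the "improve model" toggle and the educational opt-outs covered in the video.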
Other videos in this series:
After Understanding AI & LLMs (Part 1): Keys 1-5, explore Keys 6 and 7: Model Size & Parameters (Capabilities & Edge AI) and Reasoning Models (Chain-of-Thought & Complex Problem Solving).
Who this is for: University administrators, healthcare professionals, researchers, and anyone handling sensitive data who needs to understand AI privacy implications and compliance requirements.
#OpenSourceAI #DataPrivacy #AICompliance #Llama #ChatGPTEdu