Key 1 - How Large Language Models Work: AI Models Explained
Author: Duke Center for Computational Thinking
Uploaded: 2025-10-31
Views: 84
Discover how large language models actually work in this foundational video. LLMs are predictive models that run on computers; they generate text by predicting the next most probable word based on massive training data. Learn the crucial distinction between models running locally on your computer, on university servers like DukeGPT, or on external cloud servers like ChatGPT.
Key concepts covered:
How LLMs predict the next word using probability
Temperature settings and creative vs. deterministic responses (see the first sketch after this list)
The difference between local models, Duke servers, and OpenAI’s infrastructure
Why LLMs are reactive, not proactive (they need prompts to respond)
How memory features are built through software engineering, not the model itself (see the second sketch after this list)
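To make the prediction and temperature ideas concrete, here is a minimal Python sketch of next-word sampling. The tiny vocabulary and the logit scores are invented for illustration; a real LLM computes scores like these over its full vocabulary at every step.

```python
import math
import random

# Invented scores (logits) for a toy vocabulary; a real model produces these
# from the words seen so far and the patterns learned during training.
logits = {"cat": 2.0, "dog": 1.5, "car": 0.3, "banana": -1.0}

def sample_next_word(logits, temperature=1.0):
    """Turn logits into probabilities (softmax scaled by temperature), then sample.
    Low temperature -> nearly deterministic; high temperature -> more varied output."""
    scaled = {w: s / temperature for w, s in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {w: math.exp(s - max_s) for w, s in scaled.items()}
    total = sum(exps.values())
    words = list(exps)
    weights = [exps[w] / total for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(logits, temperature=0.2))   # almost always "cat"
print(sample_next_word(logits, temperature=1.5))   # spread more evenly across the words
```

This is also why responses vary between runs: sampling is random, and temperature controls how far the model strays from its single most probable next word.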
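And a sketch of the memory point: the model itself is stateless and only reacts to the prompt it is given, so "memory" is the surrounding software re-sending earlier turns with each request. The call_model function below is a hypothetical placeholder, not a real API.

```python
def call_model(prompt: str) -> str:
    # Hypothetical placeholder: a real app would send the prompt to a local model,
    # a university server such as DukeGPT, or an external service such as ChatGPT.
    return f"(model reply based on {len(prompt)} characters of context)"

history = []  # the application, not the model, remembers past turns

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nAssistant:"   # re-send everything so far
    reply = call_model(prompt)
    history.append(f"Assistant: {reply}")
    return reply

print(chat("My name is Ada."))
print(chat("What is my name?"))  # the name is "remembered" only because history was re-sent
```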
Other videos in this series:
This is Key 1 of 8. Continue with Key 2 to learn how tools provide context to enhance LLM capabilities, or watch the full playlist to master AI fundamentals.
Who this is for: Anyone using ChatGPT, Claude, or other AI tools who wants to understand what’s actually happening under the hood. Perfect for educators, students, and professionals exploring AI integration.
#LLM #MachineLearning #ChatGPT #AIExplained #NeuralNetworks