How Memory Systems Work in AI Agents (Explained) | Maria Benavente
Author: ZeroEntropy
Uploaded: 2026-01-02
Views: 236
In this talk, Maria Benavente explains how memory is engineered for AI agents and why large language models require external memory layers to behave statefully. She breaks down parametric, contextual, extended, and long-term memory, then presents a practical blueprint for adaptive agent memory systems.
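The core idea of the talk, that a stateless LLM only "remembers" what an external memory layer injects into its context window, can be illustrated with a minimal Python sketch. This is not code from the talk: the `MemoryStore` and `build_context` names are hypothetical, and a toy word-overlap scorer stands in for the embedding-based semantic search a production system like Mem0 or Zep would use.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Long-term store for facts the stateless model cannot retain itself."""
    memories: list[str] = field(default_factory=list)

    def add(self, fact: str) -> None:
        self.memories.append(fact)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Stand-in for semantic search: rank memories by word overlap
        # with the query (a real system would compare embeddings).
        q = set(query.lower().split())
        scored = sorted(
            self.memories,
            key=lambda m: len(q & set(m.lower().split())),
            reverse=True,
        )
        return scored[:k]

def build_context(store: MemoryStore, user_message: str) -> str:
    """Inject retrieved memories into the prompt so the model appears stateful."""
    recalled = store.retrieve(user_message)
    memory_block = "\n".join(f"- {m}" for m in recalled)
    return f"Relevant memories:\n{memory_block}\n\nUser: {user_message}"

store = MemoryStore()
store.add("User prefers answers in Spanish.")
store.add("User is building a retrieval pipeline with Mem0.")
store.add("User's cat is named Ada.")

print(build_context(store, "How should I configure the retrieval pipeline?"))
```

Each turn, the agent retrieves the most relevant stored facts and prepends them to the prompt; the model itself never changes, only the context it is fed, which is the sense in which the memory layer makes it behave statefully.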
00:00 – Introduction to Agent Memory Systems (Maria Benavente)
01:03 – Why LLMs Are Stateless by Design
02:02 – What “Memory” Means in AI Systems
02:56 – Parametric vs Contextual Memory
03:54 – Extended Memory vs Long-Term Memory
04:51 – How External Memory Feeds the Context Window
05:46 – Blueprint for Adaptive Agent Memory
06:42 – Temporal Adaptation & Memory Decay
07:39 – Sustainable Memory Growth & Noise Filtering
08:37 – Granularity, Navigation & Context Awareness
09:38 – Overview of Agent Memory Architectures
10:31 – Production-Ready Systems: Mem0 & Zep
11:32 – MemGPT: Treating LLMs Like an Operating System
12:47 – A-MEM: Agentic & Self-Organizing Memory
13:39 – Mem0 Vector-Based Memory Architecture
14:32 – Hybrid Context Construction Strategy
15:24 – Memory Update Logic & Deduplication
16:26 – Mem0-G: Graph-Based Memory System
17:32 – Entity Extraction & Relationship Modeling
18:26 – Graph Traversal vs Semantic Search
19:26 – Mem0 vs Mem0-G: Choosing the Right System
20:30 – Zep Architecture Overview
21:36 – Temporal Knowledge Graph Design
22:38 – Semantic Entity Graphs & Temporal Metadata
23:32 – Community Graphs for Abstraction & Navigation
#zeroentropy