Cut AI Costs by 4000x Using Semantic Caching: The Valkey Story
Author: Appsmith
Uploaded: 2025-09-01
Views: 2,734
When Redis abandoned its open source license, it sent shockwaves through the developer community. But what most people missed was the opportunity it created for something better.
In this revealing episode of The AI Smiths, @KevinBlancoZ sits down with Roberto Luna Rojas, Senior Developer Advocate at @awsdevelopers for @valkeyproject (Linux Foundation), to break down the real story behind Redis's controversial move and how Valkey is now solving AI's biggest cost problem.
🎯 What You'll Learn:
How companies are bleeding money on repetitive LLM calls
Why semantic caching can make your AI app 4000x faster (see the sketch after this list)
The difference between vendor-controlled vs. foundation-governed projects
How Valkey immediately implemented features Redis had rejected
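To make the caching idea concrete, here is a minimal, illustrative sketch of the semantic-caching pattern discussed in the episode. It is not the Valkey API: embed_fn and llm_fn are hypothetical placeholders for your embedding model and LLM call, the 0.92 similarity threshold is an arbitrary assumption, and in a real deployment the cached vectors would live in a Valkey instance with vector search rather than in a Python list. The point is simply to reuse an answer you already paid for when a new prompt is semantically close to a previous one.

```python
import numpy as np

class SemanticCache:
    """Toy in-memory semantic cache: reuse a stored LLM answer when a new
    prompt is close enough (by cosine similarity) to one seen before."""

    def __init__(self, embed_fn, llm_fn, threshold=0.92):
        self.embed_fn = embed_fn    # prompt -> embedding vector (placeholder)
        self.llm_fn = llm_fn        # prompt -> completion (the slow, costly call)
        self.threshold = threshold  # similarity required to count as a cache hit
        self.vectors = []           # cached prompt embeddings (unit length)
        self.answers = []           # cached completions

    def ask(self, prompt: str) -> str:
        v = np.asarray(self.embed_fn(prompt), dtype=float)
        v /= np.linalg.norm(v)
        if self.vectors:
            sims = np.stack(self.vectors) @ v   # cosine similarity against the cache
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                return self.answers[best]       # cache hit: no LLM call, no cost
        answer = self.llm_fn(prompt)            # cache miss: pay for the LLM once
        self.vectors.append(v)
        self.answers.append(answer)
        return answer

# Usage (with hypothetical callables):
# cache = SemanticCache(embed_fn=my_embedder, llm_fn=my_llm)
# cache.ask("How do I reset my password?")   # miss: one paid LLM call
# cache.ask("How can I reset my password?")  # likely hit: near-instant, free
```

Serving a hit is a single similarity lookup instead of a full model inference, which is where the orders-of-magnitude latency and cost savings discussed in the episode come from.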
COMMUNITY
— — — — — — — — — — — — — — — — — —
🧑🏽💻 Join the Community: https://community.appsmith.com/
🙋🏽 Get Support on Discord: / discord
⭐️ Star on Github: https://github.com/appsmithorg/appsmith
🌐 Follow on 𝕏: / theappsmith
🌐 Connect on LI: / appsmith
✨ Video tags - #AI #Redis #Valkey #OpenSource #InMemoryDatabase #LinuxFoundation #SemanticCaching #VectorSearch #LLM #LLMCosts #AICosts #MachineLearning #AIOptimization #devrel