Testing LLMs on the NEW 16gb Raspberry Pi5 (Llama 11B & Qwen 14B)
Author: Bijan Bowen
Uploaded: 2025-01-14
Views: 25299
Timestamps:
00:00 - Intro
01:28 - Unboxing
02:40 - Ollama Download
03:38 - Qwen 14B Test
07:13 - Ollama Glitch PSA
08:53 - Llama3.2 11B Vision
13:48 - Pi5 8GB Comparison
16:26 - Other SBC Comments
17:17 - Closing Thoughts
Experience the capabilities of the new 16GB Raspberry Pi 5 as we put it through its paces with large language models using Ollama. In this comprehensive test, we explore territory previously out of reach for the Raspberry Pi - running sophisticated models like Qwen2.5 14B and the multimodal Llama 3.2 11B Vision.
Watch as we demonstrate real-world token generation speeds and compare performance metrics, showcasing how this RAM upgrade opens new doors for local AI processing. We directly contrast with the 8GB Pi's limitations, highlighting models that were previously out of reach but now run successfully on this new hardware.
Whether you're an AI enthusiast, a Raspberry Pi tinkerer, or just curious about the future of local LLM deployment, this video provides concrete data and practical insights into what's now possible with the 16GB Pi 5. Join us as we explore the expanding capabilities of local AI processing and what this means for the future of edge computing.
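For viewers who want to reproduce the test, the general workflow with Ollama looks like the sketch below. These are standard Ollama commands and model tags; the exact invocations and model variants used in the video may differ.

```shell
# Install Ollama on the Pi using the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull and run the two models tested on the 16GB Pi 5
# (model tags as published in the Ollama library)
ollama run qwen2.5:14b
ollama run llama3.2-vision:11b

# --verbose prints token-generation stats (eval rate, etc.) after each response
ollama run qwen2.5:14b --verbose "Explain edge computing in one sentence."
```

Note that the 14B and 11B models need more memory than the 8GB Pi 5 provides, which is why the 16GB board is the focus of this comparison.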