Coding with Nvidia Jetson is not what you think. (Rather do this)
Author: Shreyash Gupta
Uploaded: 2025-09-03
Views: 6375
Can an Nvidia Jetson power local AI coding in your IDE? In this video, I test a real setup: an Ollama server on a Jetson Orin Nano 8GB, exposed via ngrok and wired into Cursor as a custom OpenAI-compatible endpoint. I show how to register models like Llama 3.2 3B, and why larger ones (e.g., 7B+ coding models, 20B) hit the 8GB memory limit. I also cover what actually works versus what breaks in agent mode and code-block generation. You'll see live performance, GPU/RAM usage in jtop, what Cursor does and doesn't support with local models, and a practical verdict on whether this is usable for day-to-day coding.
This is for homelab tinkerers, server nerds, and desk-setup enthusiasts who want to self-host models for coding help. Drop a comment with your Jetson or local AI setup, and subscribe if you want more hands-on homelab and networking experiments.
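For reference, the setup shown in the video can be sketched roughly like this. This is a minimal outline, not a transcript of the video's exact commands: the model tag, ngrok invocation, and Cursor settings path are assumptions, and ngrok requires a registered authtoken.

```shell
# On the Jetson Orin Nano: install Ollama (listens on localhost:11434 by default)
curl -fsSL https://ollama.com/install.sh | sh

# Bind to all interfaces so ngrok can reach it, then start the server
OLLAMA_HOST=0.0.0.0 ollama serve &

# Pull a small model that fits in the Nano's 8 GB of shared RAM (tag assumed)
ollama pull llama3.2:3b

# Tunnel the Ollama port to a public URL (requires an ngrok account/authtoken)
ngrok http 11434

# In Cursor: override the OpenAI base URL with the ngrok URL plus /v1
# (Ollama's OpenAI-compatible API), then add "llama3.2:3b" as a custom model.
```

Ollama serves an OpenAI-compatible API under `/v1`, which is what lets Cursor's custom-endpoint option talk to it; anything that speaks the OpenAI chat-completions format should work the same way.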
Song: Malibu
Composer: jiglr
License: Creative Commons (BY-SA 3.0) http://creativecommons.org/licenses/b...
Music powered by BreakingCopyright: https://breakingcopyright.com