Run Official Gemma 3 QAT on CPU with Llama.CPP and Ollama
Author: Fahd Mirza
Uploaded: Apr 3, 2025
Views: 2,292
This video is a simple, step-by-step tutorial on installing llama.cpp and running the official Gemma 3 12B QAT model in GGUF format on CPU, both with llama.cpp and with Ollama.
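The exact commands are not included in this description, so here is a minimal sketch of the workflow covered in the video: build llama.cpp for CPU, pull the official QAT GGUF from Hugging Face, and optionally run the same model through Ollama. The Hugging Face repo id (google/gemma-3-12b-it-qat-q4_0-gguf) and the Ollama tag (gemma3:12b-it-qat) are assumptions based on Google's published QAT releases; check the resource link below for the exact names.

# Build llama.cpp for CPU (assumes git and cmake are installed)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Chat with the QAT GGUF pulled straight from Hugging Face (repo id assumed)
./build/bin/llama-cli -hf google/gemma-3-12b-it-qat-q4_0-gguf -p "Explain QAT in one sentence."

# Or run the same model via Ollama (tag assumed)
ollama run gemma3:12b-it-qat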
🔥 Get 50% Discount on any A6000 or A5000 GPU rental, use following link and coupon:
https://bit.ly/fahd-mirza
Coupon code: FahdMirza
🚀 This video is sponsored by https://camel-ai.org/ which is an open-source community focused on building multi-agent infrastructures.
🔥 Buy Me a Coffee to support the channel: https://ko-fi.com/fahdmirza
#gemma3 #gemma12b #llamacpp #GEMMAQAT
PLEASE FOLLOW ME:
▶ LinkedIn: / fahdmirza
▶ YouTube: / @fahdmirza
▶ Blog: https://www.fahdmirza.com
RELATED VIDEOS:
▶ Resource https://huggingface.co/google/gemma-3...
All rights reserved © Fahd Mirza
