Local LLM Fine-tuning on Mac (M1 16GB)
Author: Shaw Talebi
Uploaded: 2024-07-29
Views: 46,079
💡 Get 30 (free) AI project ideas: https://30aiprojects.com/
Here, I show how to fine-tune an LLM locally on an M-series Mac. The example adapts Mistral-7B to respond to YouTube comments in my likeness.
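For reference, here is a minimal sketch of the setup and training step (the environment-setup and QLoRA fine-tuning chapters below walk through the real thing). The script path, model name, data folder, and flag values are assumptions based on the mlx-examples LoRA script, not the exact code from the repo linked below:

# Sketch only: install MLX's LLM tooling into a fresh environment, then launch
# LoRA training on a 4-bit quantized Mistral-7B from the mlx-community hub.
#   pip install mlx mlx-lm
import subprocess

subprocess.run([
    "python", "scripts/lora.py",   # local copy of the mlx-examples LoRA script (hypothetical path)
    "--model", "mlx-community/Mistral-7B-Instruct-v0.2-4bit",  # 4-bit model fits in 16 GB unified memory
    "--train",                     # train LoRA adapters rather than only running evaluation
    "--data", "data",              # folder holding train.jsonl / valid.jsonl / test.jsonl
    "--batch-size", "4",
    "--lora-layers", "16",         # number of transformer blocks that receive LoRA adapters
    "--iters", "100",
], check=True)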
📰 Blog: https://medium.com/towards-data-scien...
💻 GitHub Repo: https://github.com/ShawhinT/YouTube-B...
🎥 QLoRA: • 3 Ways to Make a Custom AI Assistant | RAG...
🎥 Fine-tuning with OpenAI: • 3 Ways to Make a Custom AI Assistant | RAG...
▶️ Series Playlist: • Large Language Models (LLMs)
More Resources:
[1] MLX: https://ml-explore.github.io/mlx/buil...
[2] Original code: https://github.com/ml-explore/mlx-exa...
[3] MLX community: https://huggingface.co/mlx-community
[4] Model: https://huggingface.co/mlx-community/...
[5] LoRA paper: https://arxiv.org/abs/2106.09685
--
Homepage: https://www.shawhintalebi.com/
Intro - 0:00
Motivation - 0:56
MLX - 1:57
GitHub Repo - 3:30
Setting up environment - 4:09
Example Code - 6:23
Inference with un-finetuned model - 8:57
Fine-tuning with QLoRA - 11:22
Aside: dataset formatting - 13:54
Running local training - 16:07
Inference with finetuned model - 18:20
Note on LoRA rank - 22:03
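To make the "Aside: dataset formatting" and "Inference with finetuned model" chapters easier to follow along with, here is a rough sketch of those two steps. The instruction template, example data, file names, and adapter_path argument are assumptions (mlx-lm's API has changed across releases), not the exact code from the video:

import json
from mlx_lm import load, generate

# Each training example is one JSON line of the form {"text": ...}, which is
# what the MLX LoRA script expects. The [INST] template here is a stand-in for
# the prompt format actually used in the video.
pairs = [("Great video, thanks!", "Glad it was helpful! -ShawGPT")]
with open("data/train.jsonl", "w") as f:
    for comment, reply in pairs:
        f.write(json.dumps({"text": f"[INST] {comment} [/INST] {reply}"}) + "\n")

# After training, load the 4-bit base model together with the saved LoRA
# adapters and generate a reply to a new comment.
model, tokenizer = load(
    "mlx-community/Mistral-7B-Instruct-v0.2-4bit",
    adapter_path="adapters",       # assumes a recent mlx-lm; older versions take an adapter .npz file
)
print(generate(model, tokenizer, prompt="[INST] Great video, thanks! [/INST]", max_tokens=140))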