Setting up a local Large Language Model (LLM)
Author: Danny Arends
Uploaded: 2023-08-30
Views: 8255
Learn how to set up a GPU-accelerated local Large Language Model on Windows. During the stream, I will show you how to set up your Windows environment for LLM development, including:
Installing Python, Notepad++, Visual Studio, CUDA, NumPy, and PyTorch
Configuring your environment for GPU acceleration (a quick verification sketch follows this list)
Downloading and loading a large language model
Creating a prompt template
Interacting with the large language model
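As a quick sanity check after installing CUDA and PyTorch, you can verify from Python that the GPU is actually visible; a minimal sketch, assuming nothing beyond a standard CUDA-enabled PyTorch install:

```python
# Quick check that the CUDA-enabled PyTorch build can see the GPU
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```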
We will also have some fun with prompt templates, trying out different ways to generate text, and I will answer your questions about LLMs. In this stream I'll build a 'Snoop Dogg'-inator; the code can be found on GitHub: https://github.com/DannyArends/LLMstr...
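For reference, here is a minimal sketch of the kind of script built during the stream, assuming the LangChain wrappers around llama-cpp-python that the chapters mention (LlamaCpp, PromptTemplate, LLMChain, a streaming callback handler); the model path and the persona template are hypothetical placeholders, not the exact ones from the stream or the GitHub repo:

```python
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# Hypothetical persona-style prompt template (placeholder wording)
template = """You are a laid-back rapper. Answer the question in that style.
Question: {question}
Answer:"""
prompt = PromptTemplate(template=template, input_variables=["question"])

# Stream generated tokens to stdout as they arrive
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

# Hypothetical model path; n_gpu_layers offloads layers to the GPU
llm = LlamaCpp(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",
    n_gpu_layers=40,
    n_ctx=2048,
    callback_manager=callback_manager,
    verbose=True,
)

llm_chain = LLMChain(prompt=prompt, llm=llm)

# Take user input and execute the chain
question = input("Ask a question: ")
llm_chain.run(question)
```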
Thanks for taking an interest in my channel 😄 If you've made it this far down, support me by liking or subscribing.
00:00:00 Sound check
00:01:27 Overview for today
00:06:56 What is a Large Language Model (used for)
00:11:24 Installing Python and the package installer for Python (pip)
00:13:20 Testing our Python installation using CMD
00:15:01 Visual Studio Community Edition & Python development Workload
00:18:34 Install the CUDA development library
00:23:04 Python packages for LLMs (NumPy, PyTorch, LlamaCPP)
00:33:09 Picking a Large Language Model
00:48:14 Coding an LLM in Python (Step by Step)
01:00:21 Create a sentence embedding in Python (embedding sketch after this chapter list)
01:10:22 Create a LLM callback handler
01:18:47 Set up the LlamaCPP Large Language Model
01:30:12 Prompt Template and LLMchain
01:35:58 User Input and executing the LLMchain
01:37:30 Running the LLM
01:40:06 Next steps in LLM development
01:44:33 Thoughts about the next stream
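The 01:00:21 chapter covers creating a sentence embedding in Python; a minimal sketch of that idea, assuming LangChain's LlamaCppEmbeddings wrapper (the model path is a hypothetical placeholder, and the stream may use a different embedding approach):

```python
from langchain.embeddings import LlamaCppEmbeddings

# Hypothetical model path; any model supported by llama-cpp-python works
embeddings = LlamaCppEmbeddings(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")

# Turn a sentence into a fixed-length vector
vector = embeddings.embed_query("Large language models can run locally too.")
print(len(vector), vector[:5])
```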
#LLMsetup #GPUacceleratedLLM #LLMdevelopment #prompttemplates #largelanguagemodels #artificialintelligence #machinelearning #generativeai