
Deep Dive Series on Training LLMs from Scratch

Author: C-DAC

Uploaded: 2025-08-04

Views: 983

Description:

We are happy to share the recording of the first session of the webinar series jointly organized by NVIDIA and C-DAC, Pune, focused on training large language models (LLMs) from scratch. This multi-part series provides a step-by-step walkthrough of the complete process of training LLMs:
1) Cluster Health Check using NCCL and MLPerf Benchmarks
2) Large-Scale Data Curation for LLM Training
3) Distributed & Stable LLM Training on Large Clusters
4) Post-training and Evaluation of Pre-trained LLMs
Sessions are scheduled every alternate Wednesday until September 3rd, 2025 (tentatively).
The first session focused on hardware and performance: we dive into various communication primitives, determine the GPU topology, and, in the end, look at different ways of benchmarking the performance of the cluster using NCCL and MLPerf.
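The bandwidth figures reported by the NCCL benchmarks mentioned above can be reproduced by hand. As a minimal sketch (the formulas follow the conventions documented in the nccl-tests performance notes; the sample numbers are hypothetical), an all-reduce of S bytes completing in t seconds across n ranks has algorithm bandwidth S/t, and the per-link "bus bandwidth" scales that by 2(n-1)/n:

```python
def allreduce_bandwidth(size_bytes: float, time_s: float, n_ranks: int):
    """Compute algorithm and bus bandwidth (GB/s) for an all-reduce,
    using the same convention as the nccl-tests benchmarks:
    algbw = S / t, busbw = algbw * 2*(n-1)/n.
    """
    algbw = size_bytes / time_s / 1e9             # algorithm bandwidth, GB/s
    busbw = algbw * 2 * (n_ranks - 1) / n_ranks   # per-link bus bandwidth, GB/s
    return algbw, busbw

# Hypothetical example: a 128 MiB all-reduce over 8 GPUs finishing in 2 ms
algbw, busbw = allreduce_bandwidth(128 * 2**20, 2e-3, 8)
print(f"algbw = {algbw:.2f} GB/s, busbw = {busbw:.2f} GB/s")
```

The bus-bandwidth correction makes results comparable across cluster sizes: it estimates the traffic each link actually carries, so a well-tuned interconnect should report a roughly constant busbw regardless of rank count.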

The resources related to the first session can be found here:
https://github.com/ayushbits/llm-deve...
Contact [email protected] for any queries.

#NPSF #GPU #CDACPune #HPCAI #AI #PARAMSiddhiAI

Related videos

Large-Scale Data Curation for LLM Training

Getting Started with CUDA and Parallel Programming | NVIDIA GTC 2025 Session

EASIEST Way to Fine-Tune a LLM and Use It With Ollama

Stanford CS229 I Machine Learning I Building Large Language Models (LLMs)

Microchip Breakthrough: We're Moving Beyond Silicon

Tec-Verse 2025 Driving Progress Through Innovation - Ashoka Session

Session 4: Deep Dive Series on Training LLMs from Scratch

Why Large Language Models Hallucinate

Distributed and Stable LLM Training on a Large-Scale Cluster

Internet in the Sky: Sergey "Flash" on how "Shahed" and "Gerbera" drones learned to work in tandem

Plamen PASKOV - LIVE STREAM

Reality project Building the Cloud. Episode 10: Managed Databases

EASIEST Way to Train LLM Train w/ unsloth (2x faster with 70% less GPU memory required)

Tensor Cores in a Nutshell

C-DAC Tech & Sports Fest 2025

C-DAC Tech & Sports Fest 2025 Closing Ceremony

How To Run Private & Uncensored LLMs Offline | Dolphin Llama 3

From Bottlenecks to Breakthroughs: Understanding GPU Performance with NVIDIA Tools

Session 5: Post-training and Evaluation of Pre-trained LLMs
