How to Keep GPUs Fed: AI Factory Insights From NVIDIA, Dell, CoreWeave, VAST Data & Solidigm
Author: Solidigm
Uploaded: 2025-10-27
Views: 110
AI factories are here, and the rules of data center design have changed. In this panel session hosted by Solidigm and moderated by TechArena, leaders from NVIDIA, Dell, CoreWeave, and VAST Data share the most urgent lessons learned building high-performance AI infrastructure at scale.
The panel breaks down five critical keys to keeping GPUs fed and utilization high: liquid cooling, rack-scale architecture, data orchestration, checkpointing, and storage efficiency.
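For context on the checkpointing point, here is a minimal, hypothetical sketch (not from the panel) of one common pattern: snapshot weights to host memory, then write the checkpoint file on a background thread so GPU training steps are not stalled waiting on storage. The model, checkpoint interval, and /mnt/nvme path are placeholder assumptions.

```python
# Illustrative sketch only: overlap checkpoint writes with training so the GPU
# stays busy. Model, interval, and paths are hypothetical placeholders.
import threading
import torch
import torch.nn as nn

model = nn.Linear(4096, 4096).cuda()          # stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def snapshot_to_cpu(module):
    # Copy weights off the GPU first; the slow file write then happens on a
    # background thread, so the training loop is not blocked on storage.
    return {k: v.detach().to("cpu", copy=True) for k, v in module.state_dict().items()}

def async_save(state, path):
    # Background write to fast local NVMe; training steps keep running meanwhile.
    t = threading.Thread(target=torch.save, args=(state, path), daemon=True)
    t.start()
    return t

pending = None
for step in range(1, 101):
    x = torch.randn(32, 4096, device="cuda")
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if step % 50 == 0:                        # checkpoint interval (assumption)
        if pending is not None:
            pending.join()                    # avoid overlapping writes
        state = snapshot_to_cpu(model)
        pending = async_save(state, f"/mnt/nvme/ckpt_{step}.pt")  # hypothetical path

if pending is not None:
    pending.join()
```

A production system would also snapshot optimizer state and shard writes across nodes, but the same idea applies: decouple the GPU compute loop from storage latency.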
What you’ll learn in this video:
• Why the industry is shifting from server-level to rack- and row-scale AI system design
• How storage and interconnect speed directly drive GPU utilization
• Why data governance and unstructured data management are new AI bottlenecks
• The rise of liquid-cooled, multi-megawatt AI racks
• Why minimizing over-fetch and tail latency is essential for energy-efficient performance
Featuring AI infrastructure leaders:
• NVIDIA: CJ Newburn
• Dell: Peter Corbett
• CoreWeave: Jacob Yundt
• VAST Data: Glenn Lockwood
• Solidigm: Alan Bumgarner
If you’re responsible for building, deploying, or scaling GPU infrastructure for training, inference, or RAG pipelines, this conversation is a must-watch.
Learn more about Solidigm SSDs for AI workloads: https://www.solidigm.com/solutions/ar...