DevOps Q&A: AI Workflows, Kubernetes Cost Optimization, and MCP Servers
Author: DevOps & AI Toolkit
Uploaded: 2026-01-08
Views: 975
In this AMA session, Viktor and Scott dive into a wide range of topics spanning AI workflows, Kubernetes operations, and platform engineering. They share their personal approaches to prompt engineering and context management when working with AI coding assistants, emphasizing the importance of keeping tasks small and managing context windows effectively to avoid hallucinations. The discussion explores how AI is transforming operations work, with particular focus on bridging knowledge gaps for traditional infrastructure teams and the challenges of feeding the right data to AI systems.
The conversation covers practical DevOps concerns, including real-time alerting strategies in Kubernetes, comparing push-based OpenTelemetry approaches with pull-based Prometheus models. Viktor and Scott also discuss Kubernetes cost optimization, recommending starting with node autoscaling tools like Karpenter before tackling workload right-sizing. They weigh in on building Kubernetes operators, strongly advocating for tools like Crossplane over custom operators when possible. Finally, they share thoughts on the current AI landscape, acknowledging a bubble while emphasizing that AI remains genuinely useful when implemented with proper context engineering.
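For context on the push-vs-pull comparison, here is a minimal Python sketch (not from the video) of the two models: the OpenTelemetry SDK pushing metrics to a collector on a timer, next to a Prometheus client exposing an endpoint for the server to scrape. The collector endpoint, port, and metric names are illustrative, and it assumes the opentelemetry-sdk, opentelemetry-exporter-otlp, and prometheus-client packages are installed.

# Push-based OpenTelemetry vs pull-based Prometheus (illustrative sketch)
import time
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter
from prometheus_client import Counter, start_http_server

# Push model: the app periodically exports metrics to a collector (endpoint is hypothetical).
otel_exporter = OTLPMetricExporter(endpoint="http://otel-collector:4317", insecure=True)
reader = PeriodicExportingMetricReader(otel_exporter, export_interval_millis=10_000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
otel_requests = metrics.get_meter("demo").create_counter("demo_requests_total")

# Pull model: the app exposes /metrics and the Prometheus server scrapes it.
prom_requests = Counter("demo_requests_total", "Total requests handled")
start_http_server(8000)  # scrape target at :8000/metrics

while True:
    otel_requests.add(1)  # sent on the exporter's schedule
    prom_requests.inc()   # read whenever Prometheus scrapes
    time.sleep(1)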
▬▬▬▬▬▬ ⏱ Timecodes ⏱ ▬▬▬▬▬▬
00:00 Intro (skip to first question)
06:29 What's your AI workflow with specs and tools?
18:25 Who should own DevEx in an organization?
21:21 Real-time alerting approach for Kubernetes workloads
29:07 Which part of ops gains most from AI?
40:40 Where to start writing a Kubernetes operator?
46:26 Low-hanging fruit for Kubernetes cost optimization
51:24 Changes in Upbound plans explained
51:57 Is there an AI bubble?
59:04 Crossplane composites vs Flux resource sets
1:01:35 Using on-prem open models vs cloud LLMs for PRs