Hands-On with Confluent Cloud: Apache Kafka®, Apache Flink®, and Tableflow
Author: Viktor Gamov
Uploaded: 2025-09-18
Confluent Cloud is a fully managed platform for Apache Kafka®, designed to simplify real-time data streaming and processing. It combines Kafka for data ingestion, Apache Flink® for stream processing, and Tableflow for converting streaming data into analytics-ready Apache Iceberg® tables. DuckDB, a lightweight analytical database, can query those Iceberg tables directly, which makes it a good fit for the workshop's analytics component. The workshop targets developers with basic programming knowledge who may be new to Kafka, Flink, or Tableflow, and provides hands-on experience within a condensed time frame.
Workshop Overview
This 2-hour hands-on workshop introduces developers to building real-time data pipelines using Confluent Cloud. You’ll learn to stream data with Apache Kafka, process it in real-time with Apache Flink, and convert it into Apache Iceberg tables using Tableflow. The workshop assumes basic familiarity with programming and provides step-by-step guidance.
Prerequisites: Before you arrive
So you can get hands-on during this workshop, please make sure the following are installed on your system:
VSCode with Confluent Extension: For managing Confluent Cloud resources.
Confluent CLI: To interact with Kafka clusters and topics.
JDK 17: Required for Flink development.
Python 3: For producing messages to Kafka.
DuckDB: For querying Tableflow Iceberg tables.
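Once the tools above are installed, the producing step looks roughly like the sketch below, which uses the `confluent-kafka` Python client. The bootstrap server, API key/secret, topic name (`orders`), and the event schema are all placeholders; the workshop will supply the real connection details from your Confluent Cloud cluster.

```python
# Sketch of a Python producer for Confluent Cloud (confluent-kafka client).
# All connection values and the "orders" topic/schema are placeholders.

import json


def order_event(order_id: int, amount: float) -> bytes:
    """Serialize a sample event as JSON bytes (schema is illustrative)."""
    return json.dumps({"order_id": order_id, "amount": amount}).encode("utf-8")


if __name__ == "__main__":
    from confluent_kafka import Producer  # pip install confluent-kafka

    producer = Producer({
        "bootstrap.servers": "<BOOTSTRAP_SERVER>",  # from Confluent Cloud UI/CLI
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": "<API_KEY>",
        "sasl.password": "<API_SECRET>",
    })

    # Send a handful of sample events, keyed by order id.
    for i in range(10):
        producer.produce("orders", key=str(i), value=order_event(i, 9.99 * i))

    producer.flush()  # block until all messages are delivered
```

Keying messages by `order_id` keeps events for the same order in one partition, so downstream Flink processing sees them in order.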
What You’ll Learn
Set up a Kafka cluster and manage topics in Confluent Cloud.
Write and run a Flink job to process streaming data.
Use Tableflow to materialize Kafka topics as Iceberg tables and query them with DuckDB.
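The final step, querying a Tableflow-materialized table from DuckDB, can be sketched as follows. The `iceberg_scan` table function comes from DuckDB's `iceberg` extension; the `s3://<BUCKET>/<TABLE>` path is a placeholder for the storage location Tableflow reports for your topic, and reading from S3 additionally requires DuckDB's S3 credentials to be configured.

```python
# Sketch: querying a Tableflow-materialized Iceberg table with DuckDB.
# The table path is a placeholder; Tableflow shows the real location
# (and the credentials you need) for each materialized topic.

def iceberg_query(table_path: str, limit: int = 5) -> str:
    """Build a DuckDB query over an Iceberg table via iceberg_scan()."""
    return f"SELECT * FROM iceberg_scan('{table_path}') LIMIT {limit}"


if __name__ == "__main__":
    import duckdb  # pip install duckdb

    con = duckdb.connect()
    con.sql("INSTALL iceberg; LOAD iceberg;")
    # Placeholder path -- replace with your Tableflow table's location.
    print(con.sql(iceberg_query("s3://<BUCKET>/<TABLE>")).fetchall())
```

Because Tableflow writes standard Iceberg metadata, any Iceberg-aware engine can read the same table; DuckDB is used here for its zero-setup local workflow.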