With a single click, AutoMQ Table Topic streams data seamlessly into your Iceberg tables, enabling continuous real-time analytics.
The built-in Kafka Schema Registry is available out of the box. Table Topic uses the registered schemas to automatically create Iceberg tables in your catalog service, such as AWS Glue.
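To make the schema-to-table translation concrete, here is a minimal pure-Python sketch of how fields in a registered Avro record schema correspond to Iceberg column types. The mapping table and the `Order` schema are illustrative assumptions, not the product's exact internals; Table Topic performs this translation automatically.

```python
# Assumed Avro-primitive -> Iceberg-type mapping (illustrative, not the
# product's exact translation table).
AVRO_TO_ICEBERG = {
    "boolean": "boolean",
    "int": "int",
    "long": "long",
    "float": "float",
    "double": "double",
    "string": "string",
    "bytes": "binary",
}

def iceberg_columns(avro_schema: dict) -> list:
    """Translate the fields of an Avro record schema into (name, iceberg_type) pairs."""
    return [
        (field["name"], AVRO_TO_ICEBERG[field["type"]])
        for field in avro_schema["fields"]
    ]

# Hypothetical schema registered for an "orders" topic.
order_schema = {
    "type": "record",
    "name": "Order",
    "fields": [
        {"name": "order_id", "type": "long"},
        {"name": "amount", "type": "double"},
        {"name": "currency", "type": "string"},
    ],
}

print(iceberg_columns(order_schema))
```

In practice the catalog entry (for example, a Glue table) is created and evolved for you as the registered schema changes; no DDL is written by hand.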
Traditional data lake ingestion typically depends on intermediary tools such as Kafka Connect or Flink. Table Topic eliminates this ETL pipeline entirely, drastically cutting cost and operational complexity.
AutoMQ's stateless, elastic architecture enables seamless scaling of brokers and dynamic reassignment of partitions. Table Topic fully leverages this architecture to handle data ingestion rates ranging from hundreds of MiB/s to several GiB/s.
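As a back-of-envelope illustration of what that scaling range implies, the sketch below estimates the broker count needed for a target ingest rate. The 100 MiB/s per-broker figure is an assumed sustained write throughput; real numbers depend on instance type and configuration.

```python
import math

def brokers_needed(ingest_mib_per_s: float, per_broker_mib_per_s: float = 100.0) -> int:
    """Back-of-envelope broker count for a target ingest rate.

    per_broker_mib_per_s is an assumed per-broker sustained throughput,
    not a measured AutoMQ figure.
    """
    return max(1, math.ceil(ingest_mib_per_s / per_broker_mib_per_s))

print(brokers_needed(300))       # hundreds of MiB/s -> a handful of brokers
print(brokers_needed(3 * 1024))  # ~3 GiB/s -> a few dozen brokers
```

Because brokers are stateless, moving between these sizes is a matter of adding or removing brokers and letting partitions be reassigned, rather than rebalancing data on local disks.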
Table Topic integrates seamlessly with Amazon S3 Tables, leveraging its catalog service and maintenance capabilities, including compaction, snapshot management, and unreferenced-file removal. This integration also enables large-scale data analysis via Amazon Athena.
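A minimal sketch of querying a Table Topic's resulting Iceberg table through Athena with boto3. The database and table names, and the S3 output location, are placeholders, and running the submission step requires AWS credentials.

```python
def build_query(database: str, table: str, limit: int = 10) -> str:
    """Assemble a simple Athena SQL statement over the Iceberg table."""
    return f'SELECT * FROM "{database}"."{table}" LIMIT {limit}'

def run_athena_query(sql: str, output_s3: str) -> str:
    """Submit the query via boto3 and return the execution id (needs AWS credentials)."""
    import boto3  # imported lazily so the sketch reads without boto3 installed
    athena = boto3.client("athena")
    response = athena.start_query_execution(
        QueryString=sql,
        ResultConfiguration={"OutputLocation": output_s3},
    )
    return response["QueryExecutionId"]

# Hypothetical catalog database and table created by Table Topic.
sql = build_query("automq_lake", "orders")
print(sql)
```

Because S3 Tables handles compaction and snapshot expiry, the table stays query-efficient without a separate maintenance job.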
AutoMQ enables seamless integration of database data through Change Data Capture (CDC), offering both stream and table formats. The stream format supports event-driven microservice applications such as order processing, notification services, and transaction reconciliation.
The table format allows businesses to conduct direct analytics on database data within the data lake, facilitating use cases such as business intelligence (BI), market analysis, transaction analysis, and user behavior analysis.
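The two formats can be pictured as two views over the same change stream: handlers react to each event (stream format), while folding the events by primary key yields the current table state (table format). The sketch below is a conceptual stand-in; the event shape loosely follows the common Debezium-style op/after convention and is an assumption, not AutoMQ's wire format.

```python
def apply_cdc(table: dict, event: dict) -> None:
    """Fold one change event into an in-memory view keyed by primary key."""
    op, key = event["op"], event["key"]
    if op in ("c", "u"):   # create / update carry the new row in "after"
        table[key] = event["after"]
    elif op == "d":        # delete removes the row
        table.pop(key, None)

# Hypothetical CDC events for an orders table.
events = [
    {"op": "c", "key": 1, "after": {"id": 1, "status": "created"}},
    {"op": "u", "key": 1, "after": {"id": 1, "status": "paid"}},
    {"op": "c", "key": 2, "after": {"id": 2, "status": "created"}},
    {"op": "d", "key": 2, "after": None},
]

orders = {}
for e in events:
    apply_cdc(orders, e)

print(orders)  # only order 1 remains, in its latest state
```

With Table Topic, this folding happens inside the data lake, so BI and analytical queries always see the up-to-date table without a bespoke merge job.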
AutoMQ Table Topic seamlessly integrates streaming and batch processing capabilities for diverse data types, including click streams, orders, financial data, IoT data, and logs. Once ingested, data can be processed in real time using Flink or Spark Streaming to derive immediate value.
Additionally, it supports large-scale batch processing with engines such as Hive, Presto, Spark, and ClickHouse, facilitating comprehensive data analysis and insights.
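As a conceptual illustration of the kind of real-time aggregation Flink or Spark Streaming would run over a Table Topic, here is a tumbling one-minute window counting click events per page. This is a pure-Python stand-in, not an actual Flink or Spark job.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_s: int = 60) -> dict:
    """Group (timestamp, page) click events into fixed windows; count per page."""
    counts = defaultdict(int)
    for ts, page in events:
        window_start = (ts // window_s) * window_s  # align to window boundary
        counts[(window_start, page)] += 1
    return dict(counts)

# Hypothetical click-stream events: (epoch seconds, page path).
clicks = [(0, "/home"), (15, "/home"), (30, "/cart"), (75, "/home")]
print(tumbling_window_counts(clicks))
```

The same ingested data remains available as Iceberg tables for the batch engines mentioned above, so the streaming and batch paths read one copy of the data.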