AutoMQ FAQ

AutoMQ Diskless Kafka Questions Answered

Answers to common questions about AutoMQ's Diskless Kafka architecture, Kafka compatibility, deployment options, pricing, migration, customer stories, and support.

General

1

What is AutoMQ?

AutoMQ is a cloud-native streaming platform that is fully compatible with Apache Kafka. It implements a Diskless architecture that stores persistent stream data in S3-compatible object storage instead of depending on broker-local disks and inter-broker replica copying.

AutoMQ keeps the Kafka protocol and ecosystem compatibility that teams already rely on, while changing the storage architecture so brokers can be stateless, elastic, and easier to operate in cloud environments.

2

How does AutoMQ differ from traditional Apache Kafka?

Traditional Kafka follows a Shared Nothing architecture: each broker owns local storage and replicates data to other brokers. AutoMQ uses a Shared Storage architecture, where stateless brokers handle compute and network I/O while object storage provides durable, elastic storage.

This design enables:

  • Near-unlimited storage capacity through S3-compatible object storage
  • Seconds-level elasticity without large partition data movement
  • Zero cross-AZ traffic for Kafka replication paths in supported deployments
  • Reduced need for EBS over-provisioning
  • 100% Apache Kafka protocol compatibility for existing applications and tools
3

Is AutoMQ 100% Kafka-compatible? Will my existing clients and tools work?

Yes. AutoMQ implements the standard Kafka protocol, so producers, consumers, Kafka Connect, MirrorMaker, Flink, Spark, and common monitoring tools can work without application code changes. In most migrations, teams only need to point clients to the AutoMQ bootstrap endpoint.
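In practice, "point clients to the AutoMQ bootstrap endpoint" means only the `bootstrap.servers` setting changes. A minimal sketch, with hypothetical hostnames (no real endpoints are implied):

```python
# Sketch: migrating a Kafka client to AutoMQ typically changes only the
# bootstrap endpoint. Hostnames below are hypothetical placeholders.

kafka_config = {
    "bootstrap.servers": "kafka-1.internal:9092,kafka-2.internal:9092",
    "acks": "all",
    "compression.type": "lz4",
    "client.id": "orders-producer",
}

# Same application, same settings -- only the endpoint changes.
automq_config = {**kafka_config,
                 "bootstrap.servers": "automq-bootstrap.internal:9092"}

changed = {k for k in kafka_config if kafka_config[k] != automq_config[k]}
assert changed == {"bootstrap.servers"}
```

The serialization, batching, and delivery settings carry over untouched because the protocol layer is the same.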

4

What deployment options does AutoMQ offer?

AutoMQ offers two main commercial deployment models:

  • AutoMQ BYOC: A managed Kafka experience where AutoMQ helps provision, scale, and patch clusters while the control plane and data plane run inside your own cloud account.
  • AutoMQ Software: A self-managed option for platform teams that operate their own Kubernetes, private cloud, or data center infrastructure.

Both options use the same Diskless Kafka foundation and Kafka-compatible protocol layer.

5

Can AutoMQ run on AWS, Google Cloud, Azure, Oracle Cloud, or private infrastructure?

Yes. AutoMQ supports AWS, Google Cloud, Azure, Oracle Cloud, and private environments that provide S3-compatible object storage. For private or hybrid environments, teams commonly use object storage systems such as MinIO or Ceph.

Architecture and Performance

1

How does AutoMQ achieve Diskless Kafka on S3 with low latency?

AutoMQ separates compute from storage. Brokers are stateless, while durable stream data is written to S3-compatible object storage. To keep produce latency low, AutoMQ uses a WAL layer for the immediate durable write path, then flushes data to object storage asynchronously.

Production deployments can use low-latency WAL options such as EBS WAL, Regional EBS WAL, or NFS WAL. AutoMQ Open Source defaults to S3 WAL, which trades some latency for a simple object-storage-based setup.
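The key idea is that a produce request is acknowledged once it lands in the WAL, while upload to object storage happens off the hot path. A toy model of that decoupling, with all names invented for illustration (this is not AutoMQ's actual code):

```python
# Toy model of the Diskless write path: a produce request is acknowledged
# after a synchronous durable append to the WAL; a background step later
# batches WAL entries into objects in S3-compatible storage.

class DisklessBroker:
    def __init__(self):
        self.wal = []            # low-latency durable log (e.g. EBS WAL)
        self.object_store = []   # S3-compatible object storage

    def produce(self, record: bytes) -> bool:
        self.wal.append(record)  # durable write on the hot path -> ack
        return True

    def flush_to_object_storage(self, batch_size: int = 2) -> None:
        # Asynchronous in practice; modeled as an explicit step here.
        while len(self.wal) >= batch_size:
            batch, self.wal = self.wal[:batch_size], self.wal[batch_size:]
            self.object_store.append(b"".join(batch))

broker = DisklessBroker()
for msg in (b"a", b"b", b"c", b"d"):
    assert broker.produce(msg)   # acked after WAL append, before any upload
broker.flush_to_object_storage()
assert broker.object_store == [b"ab", b"cd"] and broker.wal == []
```

Because acknowledgment depends only on the WAL write, produce latency tracks the WAL medium (EBS, Regional EBS, NFS, or S3) rather than object-storage PUT latency.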

2

Does "Diskless" mean I lose data if a broker crashes?

No. Diskless means brokers do not keep persistent state on local disks. Durable data is written to the WAL and then to object storage. If a broker fails, another broker can take over the partition mapping because the persistent data is not tied to the failed broker's local disk.

3

S3 usually has high latency. How does AutoMQ support real-time workloads?

AutoMQ does not write incoming messages directly into the final object storage layout on the hot path. Incoming messages first go through the WAL, then the system asynchronously writes stream objects to S3-compatible object storage.

For latency-sensitive workloads, AutoMQ commercial editions can use high-performance WAL options such as Regional EBS WAL or NFS WAL. AutoMQ Open Source with S3 WAL is simpler to deploy and is better suited to latency-tolerant workloads such as logging and monitoring.

4

How does AutoMQ handle scaling and capacity planning?

AutoMQ brokers are stateless, so capacity can be adjusted based on workload demand without moving full partitions of persistent data between brokers. Partition reassignment is primarily a metadata operation because durable data already lives in shared object storage.

This helps teams:

  • Scale out or in within seconds in supported environments
  • Reduce reserved broker and disk capacity for future spikes
  • Use Self-Balancing to distribute partitions and traffic
  • Avoid long data-copy operations during normal partition reassignment
5

Can AutoMQ be used for long-term data retention?

Yes. AutoMQ stores persistent stream data in S3-compatible object storage, which is designed for high durability and large-scale retention. This makes AutoMQ well suited for keeping Kafka data for days, weeks, months, or longer based on business and compliance requirements.

Pricing and Trial

1

How does AutoMQ pricing work?

AutoMQ pricing is designed around actual usage rather than pre-provisioned Kafka disk capacity. Customers pay cloud provider infrastructure costs directly to their cloud provider and pay AutoMQ service fees based on the selected plan and workload usage.

The main cost savings come from object storage, elastic stateless brokers, no partition fees, and reduced cross-AZ transfer costs in supported deployments.

2

Why is AutoMQ more cost-effective than traditional Kafka?

AutoMQ is built on object storage with a Diskless architecture, which can reduce compute, storage, and network costs compared with traditional Kafka deployments.

  • Storage: Object storage offers lower unit cost and pay-as-you-go flexibility.
  • Compute: Stateless brokers can scale with demand instead of being reserved for peak load.
  • Network: In cloud environments such as AWS, AutoMQ can avoid traditional cross-AZ replication traffic.
3

What assumptions are used in the pricing calculator?

The pricing calculator uses AWS US East (N. Virginia) as the reference region. EC2, S3, network, and MSK pricing are based on AWS public rates; actual costs may vary based on region, discounts, workload shape, and cloud account terms.

Calculator assumptions include:

  • Apache Kafka is configured with three replicas for data durability
  • Throughput metrics are based on server-side compressed data
  • Monthly usage is calculated as 730 hours
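A back-of-envelope model using these assumptions (3 Kafka replicas, compressed throughput, 730-hour month). All prices and instance counts below are illustrative placeholders, not current AWS rates or AutoMQ sizing guidance:

```python
# Back-of-envelope cost model under the calculator assumptions above.
# Every rate and size here is a hypothetical placeholder.

HOURS_PER_MONTH = 730
throughput_mib_s = 50                   # server-side compressed ingress
retention_days = 7

ingested_gib = throughput_mib_s * 86_400 * retention_days / 1024
kafka_disk_gib = ingested_gib * 3       # 3 replicas on block storage
diskless_s3_gib = ingested_gib          # single logical copy in object storage

EBS_PER_GIB_MONTH = 0.08                # hypothetical $/GiB-month
S3_PER_GIB_MONTH = 0.023                # hypothetical $/GiB-month
kafka_storage = kafka_disk_gib * EBS_PER_GIB_MONTH
diskless_storage = diskless_s3_gib * S3_PER_GIB_MONTH

BROKER_PER_HOUR = 0.40                  # hypothetical $/hour per broker
kafka_compute = 9 * BROKER_PER_HOUR * HOURS_PER_MONTH     # sized for peak
diskless_compute = 3 * BROKER_PER_HOUR * HOURS_PER_MONTH  # scaled to demand

assert diskless_storage < kafka_storage
assert diskless_compute < kafka_compute
```

Swapping in your own throughput, retention, and regional rates reproduces the kind of comparison the calculator performs.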
4

How are AutoMQ fees billed?

AutoMQ recommends subscribing through AWS, Google Cloud, or Azure Marketplace for streamlined billing. If marketplace payment is not available, customers can contact AutoMQ to arrange contract-based billing.

5

Does AutoMQ support pay-as-you-go and annual contracts?

Yes. AutoMQ supports both pay-as-you-go and annual contract billing. Annual contracts can include discounted rates based on workload size and commercial terms.

6

Is a free trial available?

Yes. AutoMQ offers a free trial with limited quota for new users. You can register through AutoMQ Cloud without a credit card.

7

Does AutoMQ charge for partitions or replication?

AutoMQ does not charge partition fees. Because durability comes from object storage rather than replicated broker-local disks, AutoMQ also avoids the multi-replica partition storage that traditional disk-based Kafka deployments require.

AutoMQ vs. Apache Kafka

1

Can I migrate from self-managed Kafka to AutoMQ without downtime?

Yes. AutoMQ Linking for Kafka supports zero-downtime migration with offset preservation. Producers and consumers can be moved gradually, and consumers can resume from preserved offsets after the cutover.

2

How much can I save compared with self-managed Kafka?

Savings vary by workload, region, retention, traffic pattern, and cloud discounts. The largest savings usually come from replacing replicated local disks with object storage, reducing cross-AZ replication traffic, and scaling stateless brokers based on actual demand. Use the pricing calculator to model your workload.

3

Is AutoMQ suitable for mission-critical, low-latency use cases?

Yes, when deployed with the right WAL option and cluster configuration. AutoMQ supports low-latency real-time workloads, and published customer stories show AutoMQ running production pipelines for companies such as Grab, Geely, Honda, JD.com, LG U+, Poizon, and Tencent Cloud EMR.

4

What if I need Kafka on-premises or in a private cloud?

AutoMQ Software is designed for teams that run in Kubernetes, private cloud, or data center environments. It can use S3-compatible object storage such as MinIO or Ceph while keeping the Kafka-compatible API surface.

AutoMQ vs. Amazon MSK

1

What is the best way to run Apache Kafka on AWS?

AWS offers Amazon MSK, but MSK still requires teams to plan broker capacity, provision EBS storage, manage partition rebalancing, and account for cross-AZ networking costs. AutoMQ BYOC runs in your AWS account and VPC with a Diskless architecture that uses object storage, stateless brokers, and automatic scaling.

2

Is AutoMQ a managed Kafka cloud service?

Yes. AutoMQ BYOC is a managed Kafka experience deployed in your cloud account. AutoMQ helps manage scaling, rebalancing, patching, and storage while keeping the data plane in your environment.

3

How does AutoMQ compare to Amazon MSK on cost?

AutoMQ reduces major MSK cost drivers: EBS storage, cross-AZ replication traffic, and over-provisioned capacity. Actual savings depend on throughput, retention, partition count, region, and cloud discounts, so teams should model their own workload in the pricing calculator.

4

What are the hidden costs of running MSK?

Beyond broker instance hours, MSK costs can include:

  • Idle EBS capacity reserved for peak retention and future growth
  • Cross-AZ networking for producer traffic, broker replication, and consumer reads
  • Operational work for partition balancing, scaling, upgrades, and patching
  • Over-provisioned capacity because Kafka clusters are often sized for peak demand
5

How do I migrate from MSK to AutoMQ?

AutoMQ Linking for Kafka supports zero-downtime migration from MSK:

  1. Deploy AutoMQ alongside the existing MSK cluster.
  2. Create a linking task from the AutoMQ Console.
  3. Replicate topics and consumer group offsets.
  4. Move producers to AutoMQ.
  5. Move consumers to AutoMQ with offset preservation.
  6. Decommission MSK after validation.

The process does not require application code changes for Kafka-compatible workloads.
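The offset-preservation step (3 and 5 above) is what lets consumers resume where they left off instead of reprocessing from the beginning. A toy model of that behavior, with illustrative topic and offset values (AutoMQ Linking performs the actual replication in the real product):

```python
# Toy model of an offset-preserving cutover: consumer group offsets are
# replicated to the target cluster, so a moved consumer resumes from its
# preserved position rather than offset 0. Values are illustrative.

source_offsets = {("orders", 0): 1042, ("orders", 1): 998}

# Step 3: replicate topics and consumer group offsets to the target.
target_offsets = dict(source_offsets)

# Step 5: a consumer moved to the target resumes from the preserved offset.
def next_position(offsets, topic, partition):
    return offsets.get((topic, partition), 0)

assert next_position(target_offsets, "orders", 0) == 1042
assert next_position(target_offsets, "orders", 1) == 998
assert next_position(target_offsets, "orders", 2) == 0  # unknown -> earliest
```

This is why the cutover can proceed partition by partition without duplicate or skipped processing, subject to the usual Kafka delivery semantics.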

6

Does MSK Serverless solve scaling and operations issues?

MSK Serverless automates some infrastructure operations, but it still has service limits and shared-service tradeoffs. For high-throughput or latency-sensitive production workloads, teams should compare workload limits, cost behavior, scaling behavior, and operational control before choosing it.

7

How is AutoMQ different from MSK Express?

AutoMQ is built around a Diskless architecture where brokers are stateless and durable data is stored in object storage. MSK Express is an AWS-managed Kafka option with its own operational model and limits. AutoMQ also provides AutoMQ Linking for Kafka to support migration from existing Kafka or MSK clusters.

8

Can I just turn on MSK Tiered Storage?

MSK Tiered Storage can reduce some storage pressure, but it does not remove every disk-based Kafka operational concern. Teams still need to understand supported Kafka versions, topic constraints, local storage behavior, and how tiered storage affects reads, compaction, retention, and cost.

9

Is AutoMQ safe to run on AWS Spot Instances?

AutoMQ brokers are stateless, which makes them a better fit for Spot Instances than disk-bound brokers. If a Spot instance is reclaimed, a replacement broker can recover state from shared storage. Teams should still set conservative Spot policies and validate availability targets for production workloads.

AutoMQ vs. Confluent

1

Why can Confluent cost more than AutoMQ?

Confluent Cloud pricing can include managed service margins, partition-based charges, networking costs, and storage costs depending on cluster type and workload. AutoMQ BYOC runs in the customer's cloud account and uses object storage with stateless brokers, which gives teams more infrastructure transparency and cost control.

2

How much can I save by switching from Confluent to AutoMQ?

Savings depend on throughput, read fanout, partition count, retention, cloud region, and current Confluent plan. The biggest potential savings usually come from avoiding partition fees, reducing cross-AZ networking costs, using object storage, and running in a BYOC model.

3

What is the partition tax and why does AutoMQ not charge it?

Some Kafka services charge based on partition count, so teams pay more as they create more topics and partitions even when throughput is modest. AutoMQ does not charge partition fees, so partition count remains primarily an engineering and workload design choice.

4

Does AutoMQ offer true BYOC deployment?

Yes. AutoMQ BYOC runs in your own cloud account. Both the control plane and data plane are deployed in your environment, helping teams keep data, networking, and infrastructure under their account and security boundary.

5

How does AutoMQ migration compare with Confluent Cluster Linking?

AutoMQ Linking for Kafka is designed for zero-downtime migration with offset preservation from Kafka-compatible clusters. The right migration approach depends on the source cluster, data volume, downtime tolerance, and operational constraints, so AutoMQ experts can help plan the cutover.

Table Topic

1

What is Table Topic?

Table Topic is AutoMQ's built-in capability for materializing Kafka topics as query-ready lakehouse tables. It helps teams stream data into Apache Iceberg without building and operating separate ETL pipelines.

2

Does Table Topic support formats other than Iceberg?

Table Topic supports Apache Iceberg today. Additional lakehouse formats may be considered based on customer demand and product roadmap priorities.

3

How does Table Topic relate to Flink? Can it replace Flink?

Table Topic is a managed alternative to writing Flink jobs solely for Kafka-to-lake ingestion. Flink remains the right choice for complex stateful processing, joins, custom transformations, or event-time computation. Table Topic is best for simpler, reliable stream-to-table materialization.

4

Does enabling Table Topic affect existing Kafka consumers?

No. Existing Kafka producers and consumers continue using Kafka topics through the Kafka protocol. Table Topic runs alongside the streaming workload and materializes data into table storage for analytics.

5

What is the data freshness when sinking to Iceberg?

Data freshness depends on the configured commit interval and downstream table maintenance strategy. Shorter commit intervals improve freshness but can create more small files, so teams should balance freshness against compaction and query cost.
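The freshness-versus-small-files tradeoff can be made concrete with simple arithmetic. The partition count and intervals below are illustrative, and real file counts also depend on throughput and table layout:

```python
# Back-of-envelope view of the tradeoff: shorter commit intervals improve
# freshness but create more (smaller) files per day, raising compaction
# and query overhead. Numbers are illustrative placeholders.

def files_per_day(commit_interval_s: int, partitions: int) -> int:
    commits = 86_400 // commit_interval_s   # commits per day
    return commits * partitions             # roughly one file per partition per commit

fresh = files_per_day(commit_interval_s=60, partitions=16)     # 1-minute commits
relaxed = files_per_day(commit_interval_s=600, partitions=16)  # 10-minute commits

assert fresh == 23_040 and relaxed == 2_304
assert fresh == 10 * relaxed   # 10x fresher -> ~10x the files to compact
```

Teams typically pick the longest commit interval their analytics latency budget allows, then rely on table maintenance to compact whatever small files remain.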

Managed Connector

1

Are existing Kafka Connect plugins supported?

Yes. AutoMQ Managed Connector is built on Apache Kafka Connect. Standard Kafka Connect plugins that run on platforms such as Confluent Platform or MSK Connect can usually run without code changes.
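Portability follows from the standard Kafka Connect connector definition, the same JSON shape accepted by the Connect REST API. The connector class and topic below are hypothetical placeholders; the converter classes are standard Apache Kafka Connect classes:

```python
import json

# A standard Kafka Connect connector definition. Any plugin that follows
# this model (name + flat "config" map) is portable across Connect
# runtimes. The connector class and topic are hypothetical examples.

connector = {
    "name": "orders-s3-sink",
    "config": {
        "connector.class": "com.example.S3SinkConnector",  # hypothetical plugin
        "tasks.max": "2",
        "topics": "orders",
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    },
}

payload = json.dumps(connector)
assert json.loads(payload)["config"]["tasks.max"] == "2"
assert json.loads(payload)["name"] == "orders-s3-sink"
```

Because the definition is plain configuration, moving a connector between Connect-based platforms is usually a matter of re-submitting the same payload.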

2

What if I need a connector plugin that is not provided by default?

You can upload custom connector plugins. As long as the plugin follows the standard Kafka Connect model and its runtime dependencies are available, AutoMQ can run it.

3

How are Managed Connectors deployed and scaled?

Managed Connectors run on Kubernetes and scale based on workload signals such as CPU, throughput, and connector task demand. This removes much of the manual worker sizing and deployment work from the user.

4

How is Managed Connector pricing structured?

Connectors and plugins are free to use. Customers pay for the compute resources consumed by connector workers, based on the selected deployment and billing model.

5

Can I monitor connector throughput, errors, and alerts?

Yes. Logs and metrics are built in. Metrics can also be exported to systems such as Prometheus, Datadog, or your existing observability stack for dashboards and alerting.

Kafka Agent

1

What is Kafka Agent?

Kafka Agent is an AI-powered operations assistant for Apache Kafka. It combines Kafka operational knowledge with LLM reasoning to help diagnose cluster issues, manage topics, analyze consumer lag, and identify problems through natural language conversation.

2

How does Kafka Agent differ from traditional monitoring tools?

Traditional monitoring tools show metrics and fire threshold-based alerts. Kafka Agent goes further by correlating metrics across brokers, consumers, and topics, identifying likely root causes, and suggesting concrete actions.

3

Is my data safe? Does Kafka Agent send data to the cloud?

Kafka Agent deploys in your private network. Kafka messages, metadata, and credentials stay in your infrastructure. LLM providers are configurable, so teams can use cloud LLM APIs with their own keys or local OpenAI-compatible models for private deployments.

4

What operations can Kafka Agent perform on my cluster?

Kafka Agent includes built-in tools organized by risk level. Read-only operations such as cluster overview, consumer lag analysis, and topic listing can run automatically. Mutating operations such as creating topics or changing configs require confirmation. High-risk operations such as deleting topics or resetting offsets require explicit confirmation and impact review.
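The gating described above can be sketched as a simple policy check. Tool names and risk assignments here are illustrative, not Kafka Agent's actual tool registry:

```python
# Sketch of risk-tiered gating: read-only tools run automatically,
# mutating tools need confirmation, and high-risk tools need explicit
# confirmation plus impact review. Names are illustrative placeholders.

RISK = {
    "cluster_overview": "read_only",
    "consumer_lag": "read_only",
    "create_topic": "mutating",
    "alter_config": "mutating",
    "delete_topic": "high_risk",
    "reset_offsets": "high_risk",
}

def allowed(tool: str, confirmed: bool = False, impact_reviewed: bool = False) -> bool:
    level = RISK[tool]
    if level == "read_only":
        return True
    if level == "mutating":
        return confirmed
    return confirmed and impact_reviewed   # high_risk

assert allowed("consumer_lag")                                   # automatic
assert not allowed("create_topic")                               # needs confirmation
assert allowed("create_topic", confirmed=True)
assert not allowed("delete_topic", confirmed=True)               # still blocked
assert allowed("delete_topic", confirmed=True, impact_reviewed=True)
```

The point of the tiering is that an LLM-driven assistant can act freely on diagnostics while destructive actions always pass through a human gate.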

5

Does Kafka Agent support multiple clusters?

Yes. A single Kafka Agent instance can manage multiple Kafka clusters with independent connection settings, credentials, and access policies.

6

What LLM providers does Kafka Agent support?

Kafka Agent supports cloud LLM providers and open-source models that expose an OpenAI-compatible API. Teams can configure provider priority and fallback based on privacy, availability, and cost requirements.
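Priority-with-fallback reduces to trying providers in configured order and moving on when one fails. A minimal sketch with simulated providers (the names and outage are invented for illustration):

```python
# Sketch of LLM provider priority and fallback: try providers in the
# configured order; on failure, fall through to the next. Providers
# below are simulated stand-ins, not real API clients.

def call_with_fallback(providers, prompt):
    errors = []
    for name, fn in providers:
        try:
            return name, fn(prompt)
        except Exception as exc:
            errors.append((name, exc))   # record and try the next provider
    raise RuntimeError(f"all providers failed: {errors}")

def cloud_llm(prompt):   # simulated cloud API outage
    raise ConnectionError("unavailable")

def local_llm(prompt):   # simulated OpenAI-compatible local model
    return f"echo: {prompt}"

name, reply = call_with_fallback([("cloud", cloud_llm), ("local", local_llm)], "hi")
assert (name, reply) == ("local", "echo: hi")
```

Ordering the chain by privacy or cost requirements is what lets teams prefer a local model while keeping a cloud API as backup, or the reverse.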

Customers and Support

1

Who are some AutoMQ customers and use cases?

AutoMQ publishes customer stories across mobility, gaming, e-commerce, telecom, cloud services, and observability. Examples include:

  • Grab: Reduced partition reassignment from hours to seconds and improved throughput efficiency
  • LG U+: Processes 2.2 billion daily messages with stateless Kafka architecture on AWS
  • Tencent Cloud EMR: Integrated AutoMQ as a first-party service with Iceberg table support
  • Poizon: Handles 40 GiB/s observability peaks with elastic scaling
  • Geely: Supports a connected vehicle platform for more than 10 million vehicles
  • Honda: Migrated with AutoMQ Linking for Kafka and reduced Kafka TCO
  • JD.com: Runs at 40 GiB/s scale with a reduced storage footprint
2

What support tiers and SLAs does AutoMQ offer?

AutoMQ offers Dev, Pro, and Enterprise tiers with different limits, availability targets, and support levels.

  • Dev: 20 MiB/sec max ingress throughput, 2,000 max partitions, 7 days max message retention, 99.50% SLA, community support
  • Pro: 1,024 MiB/sec max ingress throughput, 20,000 max partitions, 90 days max message retention, 99.95% SLA, business-hours support with 3-hour response time
  • Enterprise: Unlimited ingress throughput, unlimited partitions, unlimited retention, 99.99% SLA, 24/7 premium support with 1-hour response time

Enterprise can also include a dedicated support engineer and Multi-Region Cluster for disaster recovery.

Partner Program

1

How does the Referral Partner commission work?

Referral Partners earn commission for qualified deals they refer that close successfully. The partner introduces and registers the opportunity, while AutoMQ handles the sales process, technical evaluation, and customer onboarding.

2

How does the Reseller Partner discount structure work?

Reseller Partners receive discounted AutoMQ pricing for customer deals and can build a recurring revenue stream while managing the customer relationship directly.

3

Can I be both a Referral Partner and a Reseller Partner?

Yes. Partners can operate under both models. Some opportunities can be referred to AutoMQ, while others can be managed directly by the partner as a reseller.

4

How can partners transition from another data streaming vendor?

AutoMQ is Kafka-compatible, so existing Kafka expertise transfers directly. AutoMQ provides partner enablement materials, migration guidance, competitive positioning, and joint go-to-market support.

Getting Started

Get Started with AutoMQ Today

Start your 14-day free trial. No credit card required.

Available on Cloud Marketplaces

Subscribe to AutoMQ directly from your preferred cloud platform
