
Preface

Enterprises’ thirst for real-time data is driving profound changes in modern data architecture. However, building efficient data streaming platforms faces two major bottlenecks: the severe cost and operational challenges of traditional Apache Kafka in cloud environments, and the architectural complexity or restrictive AGPL licensing of existing object storage solutions. These hurdles create significant barriers to cost-effective, scalable, and commercially viable real-time data processing.

To address these challenges, AutoMQ and RustFS have formed a strategic partnership. This collaboration deeply integrates AutoMQ’s cloud-native, 100% Kafka-compatible stream processing with RustFS, a high-performance, Apache 2.0 licensed distributed object storage. By combining these two technologies, enterprises gain access to a next-generation Diskless Kafka platform that delivers a superior architecture, lower TCO, and complete avoidance of licensing risks.

Hands-On Guide: Deploying AutoMQ with RustFS in 4 Steps

Prerequisites

Before starting, make sure you have Docker and Docker Compose installed and running on your system. This guide uses Docker Compose to deploy a decoupled AutoMQ and RustFS stack and solves the two main orchestration challenges: service startup ordering and automatic S3 bucket initialization.

Deployment Instructions

With the prerequisites in place, we’ll now build the stack. The deployment breaks down into three pieces: defining all services in a single docker-compose.yaml, understanding how they are orchestrated, and validating the end-to-end data flow.
  1. The S3 backend (rustfs): This service acts as our S3-compatible storage.
    • healthcheck : This block is crucial. It tells Docker not to assume the service is “on,” but to actively probe its /health endpoint. Only when this check passes is the service considered “ready” to handle S3 requests.
  2. The bucket initializer (mc-init): This is a small utility container whose only job is to create our S3 buckets.
    • depends_on with condition: service_healthy : This is the key to our automation. It tells Docker Compose not to start the mc-init container until the rustfs service’s healthcheck is passing, which prevents errors where the script tries to create buckets on a service that isn’t ready.
  3. The AutoMQ service (server1): This is the main Kafka broker service.
    • depends_on : This service depends on both rustfs and mc-init, so Docker Compose will not start the broker until the S3 backend is healthy and the bucket-creation container has been launched.
    • Endpoint override : The `--override` flags in the command block (e.g., endpoint=http://rustfs:9000) are critical. They redirect AutoMQ’s S3 requests away from the public AWS endpoints and toward our own rustfs container within the Docker network.
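The poll-until-healthy pattern that Docker runs for us can be sketched as a small shell loop. This is a hedged illustration, not part of the stack: the `wait_healthy` helper and its arguments are hypothetical, and only the curl probe matches what the compose file configures.

```shell
# Sketch of the polling loop Docker performs for a healthcheck.
# wait_healthy is a hypothetical helper; the real probe is curl below.
wait_healthy() {
  probe="$1"       # command to run as the health probe
  retries="$2"     # failures to tolerate (compose: retries: 3)
  interval="$3"    # seconds between attempts (compose: interval: 30s)
  n=0
  while [ "$n" -lt "$retries" ]; do
    if $probe > /dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    n=$((n + 1))
    sleep "$interval"
  done
  echo "unhealthy"
  return 1
}

# The real probe would be:
#   wait_healthy "curl -f http://localhost:9000/health" 3 30
# Demo with a stub probe that always succeeds:
wait_healthy true 3 0
```

Only once this loop reports healthy does Compose allow dependents gated on `condition: service_healthy` to start.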

Configuration

This entire stack—all three services and their dependencies—is defined in a single docker-compose.yaml file. Create this file now and paste the complete configuration block below into it.

services:
  rustfs:
    image: rustfs/rustfs:latest
    container_name: rustfs-server
    ports:
      - "9000:9000"
      - "9001:9001"  
    environment:
      - RUSTFS_VOLUMES=/data
      - RUSTFS_ADDRESS=0.0.0.0:9000
      - RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9001
      - RUSTFS_CONSOLE_ENABLE=true
      - RUSTFS_ACCESS_KEY=rustfsadmin
      - RUSTFS_SECRET_KEY=rustfsadmin
    volumes:
      - rustfs_data:/data
    networks:
      - automq_net
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "sh", "-c", "curl -f http://localhost:9000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # Bucket initializer (MinIO Client)
  mc-init:
    container_name: mc-init-rustfs
    image: minio/mc:latest
    depends_on:
      rustfs:
        condition: service_healthy
    networks:
      - automq_net
    entrypoint: >
      /bin/sh -c "
      echo 'Waiting for RustFS service...';
      /usr/bin/mc alias set myrustfs http://rustfs:9000 rustfsadmin rustfsadmin;
      echo 'RustFS is ready. Creating buckets...';
      /usr/bin/mc mb myrustfs/automq-data;
      /usr/bin/mc mb myrustfs/automq-ops;
      echo 'Buckets created successfully. Keeping container alive.';
      tail -f /dev/null
      "


  server1:
    container_name: "automq-server1"
    image: automqinc/automq:1.6.0
    stop_grace_period: 1m
    environment:
      - KAFKA_S3_ACCESS_KEY=rustfsadmin
      - KAFKA_S3_SECRET_KEY=rustfsadmin
      - AWS_ACCESS_KEY_ID=rustfsadmin
      - AWS_SECRET_ACCESS_KEY=rustfsadmin
      - KAFKA_HEAP_OPTS=-Xms1g -Xmx4g -XX:MetaspaceSize=96m -XX:MaxDirectMemorySize=1G
      - CLUSTER_ID=3D4fXN-yS1-vsQ8aJ_q4Mg
    command:
      - bash
      - -c
      - |
        /opt/automq/kafka/bin/kafka-server-start.sh \
        /opt/automq/kafka/config/kraft/server.properties \
        --override cluster.id=$$CLUSTER_ID \
        --override node.id=0 \
        --override controller.quorum.voters=0@server1:9093 \
        --override controller.quorum.bootstrap.servers=server1:9093 \
        --override advertised.listeners=PLAINTEXT://server1:9092 \
        --override s3.data.buckets='0@s3://automq-data?region=us-east-1&endpoint=http://rustfs:9000&pathStyle=true' \
        --override s3.ops.buckets='1@s3://automq-ops?region=us-east-1&endpoint=http://rustfs:9000&pathStyle=true' \
        --override s3.wal.path='0@s3://automq-data?region=us-east-1&endpoint=http://rustfs:9000&pathStyle=true'
    networks:
      - automq_net
    depends_on:
      rustfs:
        condition: service_healthy
      mc-init:
        condition: service_started

networks:
  automq_net:
    driver: bridge

volumes:
  rustfs_data:
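The s3.data.buckets, s3.ops.buckets, and s3.wal.path overrides above all use AutoMQ’s `<index>@s3://<bucket>?<params>` URI format. A quick way to sanity-check one of these values before launching the stack is to split it with plain shell parameter expansion (a hedged sketch: the variable names are ours, only the URI string comes from the compose file):

```shell
# Pull apart an AutoMQ bucket URI: '<index>@s3://<bucket>?<query>'.
uri='0@s3://automq-data?region=us-east-1&endpoint=http://rustfs:9000&pathStyle=true'

index="${uri%%@*}"          # bucket index within the config ('0')
rest="${uri#*@s3://}"       # strip the index and the s3:// scheme
bucket="${rest%%\?*}"       # bucket name ('automq-data')
query="${rest#*\?}"         # 'region=...&endpoint=...&pathStyle=true'

echo "index:  $index"
echo "bucket: $bucket"
echo "$query" | tr '&' '\n' | sed 's/^/param:  /'
```

The endpoint parameter must name the rustfs service as reachable inside the Docker network, and pathStyle=true matters because RustFS is addressed by path rather than by virtual-hosted bucket subdomains.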

Validate Deployment

Time to see it all work. With your docker-compose.yaml file saved, launch the stack, then verify that the three services defined in your YAML were orchestrated correctly. The test uses the standard Kafka bin scripts, found inside the automq-server1 container, to create a topic and send messages, proving that AutoMQ is successfully reading and writing data to the RustFS backend.

4.1 Launch and Check Status


# Start all services
docker compose up -d

# Check the status to see if everything is running
docker compose ps

Expected Result: The STATUS for rustfs-server should include (healthy), and all three services should show as Up.

NAME             IMAGE                    COMMAND                  SERVICE   CREATED         STATUS                   PORTS
automq-server1   automqinc/automq:1.6.0   "bash -c '/opt/autom…"   server1   ...             Up ...
mc-init-rustfs   minio/mc:latest          "/bin/sh -c ' echo '…"   mc-init   ...             Up ...
rustfs-server    rustfs/rustfs:latest     "/entrypoint.sh rust…"   rustfs    ...             Up ... (healthy)         ...
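If you script this check, grepping for the (healthy) marker makes a simple gate. A hedged sketch: the sample line below is canned output standing in for a live run, where you would pipe `docker compose ps` straight into the grep.

```shell
# Gate on the healthcheck marker in `docker compose ps` output.
# Canned sample line stands in for live output here:
ps_line='rustfs-server    rustfs/rustfs:latest    "/entrypoint.sh rust…"   rustfs    Up 2 minutes (healthy)'

if printf '%s\n' "$ps_line" | grep -q 'rustfs-server.*(healthy)'; then
  echo "rustfs healthy"
else
  echo "rustfs not healthy yet"
fi
# Live equivalent: docker compose ps | grep -q 'rustfs-server.*(healthy)'
```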

4.2 Create a Topic

This step proves AutoMQ can successfully write to the RustFS backend by creating a Kafka topic.

docker exec -it automq-server1 bash
/opt/automq/kafka/bin/kafka-topics.sh --create --topic test-topic --bootstrap-server server1:9092 --partitions 1 --replication-factor 1

Expected Result: You will see the message Created topic test-topic.

4.3 Produce and Consume

Finally, let’s confirm the full data flow by sending and receiving a message. This requires two terminals, each with a shell inside the broker container. Terminal 1 (Consumer):

docker exec -it automq-server1 bash
/opt/automq/kafka/bin/kafka-console-consumer.sh --bootstrap-server server1:9092 --topic test-topic

Terminal 2 (Producer):

docker exec -it automq-server1 bash
/opt/automq/kafka/bin/kafka-console-producer.sh --broker-list server1:9092 --topic test-topic
> Hello, AutoMQ and RustFS!

Expected Result: The message Hello, AutoMQ and RustFS! should appear in Terminal 1 almost immediately. This confirms the entire data flow, from producer to S3 storage and back to the consumer, is fully operational.
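For automation (a CI smoke test, for instance), the two-terminal flow above can be collapsed into a single non-interactive round trip: pipe one message into the console producer, then read exactly one back with --from-beginning --max-messages 1 (standard Kafka CLI flags). A hedged sketch: the snippet only assembles and prints the commands, since actually running them requires the live stack.

```shell
# Build a one-shot produce/consume pair using stock Kafka CLI flags.
topic="test-topic"
bootstrap="server1:9092"

produce="echo 'Hello, AutoMQ and RustFS!' | /opt/automq/kafka/bin/kafka-console-producer.sh --bootstrap-server $bootstrap --topic $topic"
consume="/opt/automq/kafka/bin/kafka-console-consumer.sh --bootstrap-server $bootstrap --topic $topic --from-beginning --max-messages 1"

# With the stack up you would run:
#   docker exec automq-server1 bash -c "$produce && $consume"
# and the consumer should print the produced line, then exit on its own
# thanks to --max-messages 1.
echo "$produce"
echo "$consume"
```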

Future Outlook

The integration of AutoMQ with RustFS demonstrates significant advantages over using standard object storage or complex traditional systems. RustFS provides a high-performance, stable storage layer, delivering high read/write throughput with stable memory usage, which allows AutoMQ to maintain its P99 low latency targets. Additionally, its lightweight, metadata-free architecture and Apache-2.0 license drastically reduce operational complexity and eliminate commercial compliance risks.

Ultimately, this partnership highlights a compelling synergy. AutoMQ provides a modern, elastic, and cost-effective architecture for Kafka by making brokers stateless and leveraging compute-storage separation. When paired with RustFS, which provides a high-performance, license-friendly, and easy-to-operate S3-compatible backend, the result is a highly reliable, scalable, and resource-efficient streaming platform. This combination is engineered to meet modern data demands, delivering both architectural elasticity (at the compute layer) and high-performance, compliant persistence (at the storage layer).