Kafka Command Line Interface (CLI): Usage & Best Practices

March 8, 2025
AutoMQ Team
8 min read

The Kafka Command Line Interface (CLI) is a collection of command-line tools, shipped with Apache Kafka, that gives developers and administrators a powerful toolkit for managing Kafka resources directly from the terminal. Often the quickest way to interact with a Kafka cluster, the CLI covers creating and configuring topics, producing and consuming messages, managing consumer groups, and monitoring cluster health. This guide explores the Kafka CLI's capabilities, common use cases, configuration options, best practices, and troubleshooting approaches to help you use this versatile toolset effectively.

Understanding Kafka CLI Tools

Kafka CLI tools consist of various shell scripts located in the bin/ directory of the Kafka distribution. These scripts provide a wide range of functionality for interacting with Kafka clusters, managing topics, producing and consuming messages, and handling administrative tasks. The CLI is particularly valuable for quick testing, troubleshooting, and automation without writing any code.
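
Each script prints its available options when invoked with --help, which is a quick way to explore a tool's capabilities without leaving the terminal:

plaintext

# See which CLI tools ship with the distribution
ls bin/

# Show all options supported by the topics tool
bin/kafka-topics.sh --help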

Essential Kafka CLI Commands

The most commonly used Kafka CLI commands, organized by function, are:

  • Topic management: kafka-topics.sh

  • Producing messages: kafka-console-producer.sh

  • Consuming messages: kafka-console-consumer.sh

  • Consumer group management: kafka-consumer-groups.sh

Let's examine each of these categories in more detail with their specific usage patterns.

Topic Management Commands

Topic management is one of the most common uses of the Kafka CLI. Here are detailed commands for managing Kafka topics:

plaintext

# Create a topic with 3 partitions and replication factor of 1
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 3 --replication-factor 1

# List all topics in the cluster
bin/kafka-topics.sh --bootstrap-server localhost:9092 --list

# Describe a specific topic
bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic my-topic

# Add partitions to an existing topic
bin/kafka-topics.sh --bootstrap-server localhost:9092 --alter --topic my-topic --partitions 6

# Delete a topic (if delete.topic.enable=true)
bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic my-topic

These commands allow administrators to create, monitor, modify, and remove topics as needed.
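
Beyond kafka-topics.sh, per-topic configuration overrides can be inspected and changed with the kafka-configs.sh tool. For example, to view a topic's overrides and then set its retention to 24 hours:

plaintext

# Show configuration overrides for a topic
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --describe

# Set retention to 24 hours (86400000 ms)
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --alter --add-config retention.ms=86400000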

Producer and Consumer Commands

The CLI provides tools for producing messages to topics and consuming messages from topics:

plaintext

# Start a console producer to send messages to a topic
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic

# Start a console consumer to read messages from a topic
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic

# Consume messages from the beginning of a topic
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning

# Consume messages as part of a consumer group
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --group my-group

These commands enable interactive testing of message production and consumption, which is valuable for debugging and verification.
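
Both tools also accept formatter properties for working with message keys. A common pattern is to produce key/value pairs separated by a colon and print keys on the consumer side:

plaintext

# Produce keyed messages, entered as key:value
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic --property parse.key=true --property key.separator=:

# Print keys alongside values when consuming
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning --property print.key=true --property key.separator=: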

Consumer Group Management

Consumer groups can be managed and monitored using these commands:

plaintext

# List all consumer groups
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

# Describe a consumer group (shows partitions, offsets, lag)
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group

# Reset offsets for a consumer group
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --reset-offsets --group my-group --topic my-topic --to-earliest --execute

# Delete a consumer group
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --delete --group my-group

These commands help in monitoring consumer progress, diagnosing performance issues, and managing consumer offsets.
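
The --reset-offsets command supports several targets besides --to-earliest, and running it with --dry-run previews the new offsets without applying them, which is a safe first step:

plaintext

# Preview a reset to the latest offsets without applying it
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --reset-offsets --group my-group --topic my-topic --to-latest --dry-run

# Rewind the group by 10 messages per partition
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --reset-offsets --group my-group --topic my-topic --shift-by -10 --execute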

Common Use Cases for Kafka CLI

The Kafka CLI serves several important use cases that make it an essential tool for Kafka administrators and developers.

Testing and Verification

The CLI is ideal for quickly testing Kafka cluster functionality. For example, you can verify that messages can be successfully produced and consumed:

plaintext

# Terminal 1: Start a consumer
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic

# Terminal 2: Produce test messages
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic
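
The same round trip can be scripted in a single terminal by piping a message into the producer and reading it back with a bounded consumer:

plaintext

# Send one message, then read exactly one message back (gives up after 10 seconds)
echo "hello kafka" | bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test-topic
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --from-beginning --max-messages 1 --timeout-ms 10000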

Data Backfilling

When you need to import historical data into Kafka, the console producer can read data from files:

plaintext

# Import data from a file to a Kafka topic
cat data.json | bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic

This approach is useful for one-time data imports or testing with sample datasets.
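
To confirm how many records landed, you can compare the input line count against the topic's end offsets. Depending on your Kafka version, end offsets are reported by the GetOffsetShell tool (newer releases also ship a kafka-get-offsets.sh wrapper):

plaintext

# Count the input records
wc -l data.json

# Print the latest offset for each partition of the topic
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --bootstrap-server localhost:9092 --topic my-topic --time -1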

Shell Scripting and Automation

The Kafka CLI can be incorporated into shell scripts to automate operations, such as monitoring logs or performing scheduled administrative tasks. For example:

plaintext

#!/bin/bash
# Watch a log file and re-produce it to Kafka whenever its contents change
LOGFILE=security_events.log
checksum=""
while true
do
  sleep 60
  new_checksum=$(md5sum "$LOGFILE" | awk '{ print $1 }')
  if [ "$new_checksum" != "$checksum" ]; then
    # Produce the updated log to the security log topic
    kafka-console-producer.sh --topic full-security-log --bootstrap-server localhost:9092 < "$LOGFILE"
    checksum=$new_checksum
  fi
done

This makes it easy to incorporate Kafka operations into broader automation workflows.

Configuration and Setup

Installation and Basic Setup

To use Kafka CLI tools, you need to have Apache Kafka installed:

  1. Download Kafka from the Apache Kafka website

  2. Extract the downloaded file: tar -xzf kafka_2.13-3.1.0.tgz

  3. Navigate to the Kafka directory: cd kafka_2.13-3.1.0

  4. Set up environment variables (optional but recommended):

plaintext

export KAFKA_HOME=/path/to/kafka
export PATH=$PATH:$KAFKA_HOME/bin
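
With the bin directory on your PATH, the tools can be run from anywhere. A quick way to confirm the setup is to print the tool version:

plaintext

# Should print the Kafka version if the PATH is set correctly
kafka-topics.sh --version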

Starting the Kafka Environment

For a basic development environment, you need to start ZooKeeper (if using ZooKeeper mode) and then Kafka:

plaintext

# Start ZooKeeper (if using ZooKeeper mode)
bin/zookeeper-server-start.sh config/zookeeper.properties

# Start Kafka
bin/kafka-server-start.sh config/server.properties
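
On Kafka 3.x you can instead run in KRaft mode, which removes the ZooKeeper dependency. A minimal single-node setup (configuration paths may vary by release) formats the storage directory once and then starts the broker:

plaintext

# Generate a cluster ID and format storage (first run only)
KAFKA_CLUSTER_ID=$(bin/kafka-storage.sh random-uuid)
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties

# Start Kafka in KRaft mode
bin/kafka-server-start.sh config/kraft/server.properties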

Secure Connections

For secure Kafka clusters, additional configuration is needed. Common authentication methods include:

SASL Authentication

plaintext

bin/kafka-topics.sh --bootstrap-server kafka:9092 --command-config client.properties --list

Where client.properties contains:

plaintext

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="user" password="password";

SSL Configuration

plaintext

bin/kafka-console-producer.sh --bootstrap-server kafka:9093 --producer.config client-ssl.properties --topic my-topic

These security configurations ensure that CLI tools can connect to secured Kafka clusters.
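
As an illustration, a minimal client-ssl.properties for an SSL listener might look like the following, where the truststore path and password are placeholders for your own values:

plaintext

security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit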

Best Practices for Kafka CLI

General Best Practices

  1. Use scripts for repetitive tasks: Create shell scripts for common operations to ensure consistency.

  2. Set default configurations: Use configuration files with the --command-config parameter to avoid typing the same options repeatedly (see the wrapper sketch after this list).

  3. Test in development first: Always test commands in a development environment before executing in production.

  4. Document commands: Maintain documentation of frequently used commands and their parameters.
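
As a sketch of practices 1 and 2 combined, a small wrapper script can bake the bootstrap server and client configuration into every invocation. The KAFKA_BOOTSTRAP and KAFKA_CLIENT_CONFIG environment variables here are illustrative names, not Kafka settings:

plaintext

#!/bin/bash
# kafka-admin.sh: apply shared defaults to kafka-topics.sh invocations
BOOTSTRAP=${KAFKA_BOOTSTRAP:-localhost:9092}
CLIENT_CONFIG=${KAFKA_CLIENT_CONFIG:-$HOME/.kafka/client.properties}
exec kafka-topics.sh --bootstrap-server "$BOOTSTRAP" --command-config "$CLIENT_CONFIG" "$@"

With this in place, ./kafka-admin.sh --list behaves like the full command with the defaults already applied.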

Production Environment Considerations

  1. Limit direct access: Restrict access to production Kafka CLI tools to authorized administrators only.

  2. Use read-only operations: Prefer read-only operations (like --describe and --list) when possible.

  3. Double-check destructive commands: Carefully verify commands that modify or delete data before executing them.

  4. Handle encoded messages carefully: When working with encoded messages, ensure consumers use the same schema as producers.

Performance Optimization

  1. Batch operations: When possible, batch related operations to minimize connections to the Kafka cluster.

  2. Be careful with --from-beginning: Avoid using this flag on large topics, as it can flood the consumer and put unnecessary load on the cluster.

  3. Use specific partitions: When debugging, specify partitions directly to limit the amount of data processed (see the example after this list).

  4. Monitor resource usage: Keep an eye on CPU and memory usage when running resource-intensive CLI commands.
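
For example, instead of replaying an entire large topic, you can target a single partition at a known offset and cap the number of records read:

plaintext

# Read 50 messages from partition 0, starting at offset 1000
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --partition 0 --offset 1000 --max-messages 50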

Troubleshooting Common Issues

When working with Kafka CLI, you may encounter various issues. Here are some common problems and their solutions:

Broker Connectivity Issues

Problem: Unable to connect to Kafka brokers

Solutions:

  • Verify that broker addresses in --bootstrap-server are correct

  • Check network connectivity and firewall rules

  • Ensure the Kafka brokers are running (the quick check after this list can help)

  • Verify that security configuration matches broker settings
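
A quick end-to-end reachability check is to ask the brokers for their supported API versions; if this call succeeds, connectivity and (where configured) authentication are working:

plaintext

# Succeeds only if the broker is reachable with the given settings
bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092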

Topic Management Issues

Problem: Topic creation failing

Solutions:

  • Check if the Kafka cluster has sufficient resources

  • Verify that topic configuration is valid

  • Ensure you have necessary permissions

  • Check if a topic with the same name already exists

Consumer Group Issues

Problem: Consumer group not working properly

Solutions:

  • Use kafka-consumer-groups.sh to verify current status

  • Check consumer configurations

  • Verify permissions for the consumer group

  • Ensure the topic exists and has messages

Conclusion

The Kafka CLI provides a powerful and efficient way to interact with Kafka clusters, offering essential functionality for developers and administrators. By understanding the available commands, following best practices, and knowing how to troubleshoot common issues, you can effectively leverage the CLI for various Kafka operations.

For simple tasks and administrative operations, the CLI remains the fastest and most direct approach. For more complex scenarios or when a graphical interface is preferred, alternative tools like Conduktor, Redpanda Console, or Confluent Control Center can complement the CLI experience.

Whether you're testing a new Kafka setup, troubleshooting issues, or automating operations, mastering the Kafka CLI is essential for anyone working with Kafka in development or production environments.

If you find this content helpful, you might also be interested in our product AutoMQ. AutoMQ is a cloud-native alternative to Kafka that decouples durability onto S3 and EBS: 10x more cost-effective, with no cross-AZ traffic cost, autoscaling in seconds, and single-digit-millisecond latency. AutoMQ's source code is available on GitHub, and big companies worldwide are already using it.
