Client SDK Guide

AutoMQ is fully compatible with Apache Kafka, so applications can integrate with it and transmit messages using standard Apache Kafka client SDKs. This document introduces the client SDKs recommended for AutoMQ and provides usage notes.

Compatibility Notes

AutoMQ is fully compatible with Apache Kafka (versions 0.10 to 3.9). It is recommended to use the Apache Kafka client SDKs to send and receive messages through AutoMQ. Thanks to Apache Kafka's protocol version negotiation, clients within the version range above can connect seamlessly.
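As a quick illustration, the minimal sketch below produces messages to AutoMQ with the standard Apache Kafka Java client; no AutoMQ-specific code is required. The bootstrap address and topic name are placeholders for your own environment.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AutoMQProducerExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Replace with the bootstrap address of your AutoMQ cluster.
        props.put("bootstrap.servers", "your-automq-endpoint:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send a single record and block until the broker acknowledges it.
            producer.send(new ProducerRecord<>("example-topic", "key", "hello AutoMQ")).get();
        }
    }
}
```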

Applications are advised to upgrade to the latest SDK version whenever possible to avoid latent defects in earlier releases.

The Apache Kafka community provides SDKs for many programming languages; refer to the community documentation for the full list. Drawing on extensive production experience, the AutoMQ technical team recommends the following SDKs for accessing AutoMQ from each language.

| Development Language | Client SDK | Recommended Version |
|----------------------|--------------------------|---------------------|
| Java | Apache Kafka Java Client | >= 3.2.0 |
| C/C++ | librdkafka | >= 2.8.0 |
| Go | franz-go | >= 1.17.1 |
| Python | kafka-python-ng | >= 2.2.3 |
| NodeJS | KafkaJS | >= 2.2.4 |
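On the consuming side, a minimal sketch with the recommended Apache Kafka Java client (>= 3.2.0) is shown below; the bootstrap address, consumer group, and topic name are placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AutoMQConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Replace with the bootstrap address of your AutoMQ cluster.
        props.put("bootstrap.servers", "your-automq-endpoint:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("example-topic"));
            while (true) {
                // Poll for new records and print their metadata and payload.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```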

If the SDK you currently use is not among the recommendations above, you can still access AutoMQ as long as the SDK is compatible with the Apache Kafka protocol. We suggest reviewing the known issues list below and adjusting parameters accordingly.

Client Parameter Tuning

When sending and receiving messages with the Apache Kafka SDK, achieving optimal performance requires not only using a recommended SDK version but also tuning key Producer and Consumer parameters.

It is recommended to refer to Kafka Client Config Tuning for checking and adjusting these parameters.
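For orientation, the sketch below shows how a few commonly tuned producer and consumer parameters are set through the Java client's Properties. The parameter names are standard Kafka client configurations, but the values are illustrative assumptions; take the actual recommendations from the tuning guide above and your own workload.

```java
import java.util.Properties;

public class TuningExample {
    public static void main(String[] args) {
        // Producer-side parameters that commonly need review (values are illustrative).
        Properties producerProps = new Properties();
        producerProps.put("batch.size", "131072");        // batch more records per request
        producerProps.put("linger.ms", "10");              // wait briefly to fill batches
        producerProps.put("compression.type", "lz4");      // reduce network and storage usage
        producerProps.put("acks", "all");                  // favor durability over latency

        // Consumer-side parameters that commonly need review (values are illustrative).
        Properties consumerProps = new Properties();
        consumerProps.put("fetch.min.bytes", "1024");          // trade latency for fewer fetches
        consumerProps.put("max.poll.records", "500");          // cap records returned per poll
        consumerProps.put("max.poll.interval.ms", "300000");   // allow enough processing time per poll

        // Pass these Properties to the KafkaProducer / KafkaConsumer constructors as usual.
    }
}
```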

Appendix: Other Known Issues

Sarama Go SDK

Known Defect: Async Producer has no memory usage limit during send retries

Defect Information:
  • Defect Version: V1.43.3
  • Community Issue: https://github.com/IBM/sarama/issues/1358
  • Community PR: https://github.com/IBM/sarama/pull/3026
  • Phenomenon: When using async_producer to send messages and the corresponding Topic undergoes partition reassignment, the Producer's memory usage can increase abnormally, potentially leading to OOM.
  • Cause: When async_producer receives server-side retryable errors (like NOT_LEADER_OR_FOLLOWER due to partition reassignment), it temporarily stores the messages in memory and retries according to the policy. However, this retry cache has no size limit, and if substantial and continuous retries occur, memory usage may unexpectedly increase.

Solution:
  • Upgrade to version >= V1.44.0.

Kafka-Go SDK

Known Defect: Consumers unable to identify Group Coordinator changes

Defect Information:
  • Defect Version: V0.4.47
  • Issue: After partition reassignment of the __consumer_offsets topic, some groups experience consumption interruptions and log errors.
  • Cause: When the __consumer_offsets partitions are reassigned, the coordinator for some groups changes. kafka-go does not handle this correctly: it continues to send requests to the old coordinator after the change, resulting in NOT_COORDINATOR errors.

Solution:
  • Restart the Consumer to recover temporarily.
  • Switch SDKs; the franz-go SDK at the version recommended in this document is preferred.

KafkaJS SDK

Known Defect: Inaccurate Consumer heartbeatInterval configuration

Defect Information:
  • Defect Version: V2.2.4
  • Community Notes: https://github.com/tulios/kafkajs/issues/130#issuecomment-422024849
  • Issue: When the Consumer's configured heartbeatInterval is too close to sessionTimeout, the Consumer may periodically leave and rejoin the group (generally without affecting consumption) and log errors.
  • Cause: In kafkajs, heartbeatInterval only guarantees the minimum interval between two heartbeats. In certain scenarios (for example, when the Consumer has reached the end of the topic and has no new messages to consume), the actual interval between heartbeats sent to the Coordinator may exceed heartbeatInterval; if the setting is too large, it may exceed sessionTimeout, causing the Coordinator to evict the Consumer.

Solution:
  • Lower heartbeatInterval or increase sessionTimeout (as a rule of thumb, heartbeat.interval.ms should not exceed 1/3 of session.timeout.ms); see the sketch below.
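The same one-third guideline applies to the Java client's equivalent settings (heartbeat.interval.ms and session.timeout.ms). The sketch below is illustrative only, and the timeout values shown are assumptions, not recommendations.

```java
import java.util.Properties;

public class HeartbeatConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Keep the heartbeat interval at or below one third of the session timeout,
        // so several heartbeats can be missed before the coordinator evicts the consumer.
        props.put("session.timeout.ms", "30000");    // illustrative value
        props.put("heartbeat.interval.ms", "10000"); // <= session.timeout.ms / 3
    }
}
```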