
Configuring Jitsu Bulker for Multi-Partition Kafka Topics

Open ZiyaadQasem opened this issue 1 year ago • 3 comments

In a Kubernetes deployment of Jitsu, the Bulker component is responsible for batching events and sending them to a ClickHouse instance. Currently, the Kafka topic that Bulker creates and consumes is configured with only one partition.

Challenges Encountered:

  • Partition Limitation: The single-partition setup leads to performance bottlenecks and limits scalability.
  • Data Rebalancing Issues: Attempting to manually increase the number of partitions on the existing Kafka topic results in data rebalancing problems, which can disrupt the data flow and processing.

Questions:

  • How can I instruct Jitsu Bulker to create Kafka topics with multiple partitions during their initial creation?

  • Are there specific configuration settings or parameters within Jitsu or Bulker that allow specifying the desired number of partitions for Kafka topics?
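To make the question concrete: the goal is for the topic to exist with more than one partition before Bulker starts consuming it. Below is a minimal sketch, using kafkajs, of what manually pre-creating such a topic could look like. The broker address and topic name are placeholders, and it is an unverified assumption that Bulker would simply reuse a pre-existing topic rather than recreating it.

```typescript
import { Kafka } from "kafkajs";

// Placeholder broker address; adjust to your deployment.
const kafka = new Kafka({ clientId: "topic-setup", brokers: ["kafka:9092"] });

async function preCreateTopic(): Promise<void> {
  const admin = kafka.admin();
  await admin.connect();
  try {
    // Create the topic with several partitions up front.
    // The topic name is hypothetical: Bulker derives its topic names
    // internally, so the real name must be taken from your deployment.
    await admin.createTopics({
      topics: [
        {
          topic: "destination-messages.example",
          numPartitions: 6,
          replicationFactor: 1,
        },
      ],
    });
  } finally {
    await admin.disconnect();
  }
}

preCreateTopic().catch(console.error);
```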

ZiyaadQasem · Aug 23 '24 11:08

We discussed it internally for a while and decided not to implement parallel processing at the moment. For data streams with deduplication enabled, parallel processing can break it: e.g., if two consumers run MERGE statements in parallel, most databases won't guarantee correctness.

For non-deduped streams it can give you a performance boost, but most of the use-cases we see require deduplication.

If we ever decide to go forward with this issue, here's what we would do:

  • Allow multiple partitions only for non-dedup streams; or
  • Run MERGE sequentially using a cluster-wide lock
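Purely to illustrate the second option: serializing MERGE across consumers could be done with a database-level lock. A rough sketch using a Postgres advisory lock follows; the lock key, table, and connection string are made up, and this is not something Bulker currently implements.

```typescript
import { Client } from "pg";

// Arbitrary application-defined lock key shared by all consumers.
const MERGE_LOCK_KEY = 42;

async function mergeWithClusterLock(connectionString: string): Promise<void> {
  const client = new Client({ connectionString });
  await client.connect();
  try {
    // Blocks until no other session holds the same advisory lock,
    // so only one consumer runs its MERGE at a time.
    await client.query("SELECT pg_advisory_lock($1)", [MERGE_LOCK_KEY]);
    await client.query(`
      MERGE INTO events AS t
      USING staging_events AS s ON t.id = s.id
      WHEN MATCHED THEN UPDATE SET payload = s.payload
      WHEN NOT MATCHED THEN INSERT (id, payload) VALUES (s.id, s.payload)
    `);
  } finally {
    await client.query("SELECT pg_advisory_unlock($1)", [MERGE_LOCK_KEY]);
    await client.end();
  }
}
```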

Meanwhile, I suggest implementing parallelization by using a different destination for each table.

vklimontovich · Aug 26 '24 16:08

Meanwhile, I suggest implementing parallelization by using a different destination for each table.

Actually, topics are created per table, so we already have that kind of parallelism.

To work around the current limitations, you can duplicate the destination and connection, then rotate writeKeys on the client side or split traffic using a JavaScript function. Deduplication may still work unreliably in this scenario.
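A minimal sketch of the client-side half of this workaround, assuming two duplicated destinations/connections with their own write keys. The write keys and ingest host are placeholders, and the `jitsuAnalytics` call from `@jitsu/js` is shown under the assumption that it accepts `host` and `writeKey` options in your version.

```typescript
import { jitsuAnalytics } from "@jitsu/js";

// Placeholder write keys from two duplicated connections.
const WRITE_KEYS = ["writeKey-A", "writeKey-B"];

// Pick a key deterministically per user so one user's events stay on one
// connection; ordering within a user is kept, dedup across connections is not.
function pickWriteKey(userId: string): string {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return WRITE_KEYS[hash % WRITE_KEYS.length];
}

const analytics = jitsuAnalytics({
  host: "https://data.example.com", // placeholder ingest host
  writeKey: pickWriteKey("user-123"),
});

analytics.track("page_view", { path: "/pricing" });
```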

absorbb · Aug 26 '24 19:08

Meanwhile, I suggest implementing parallelization by using a different destination for each table.

Actually, topics are created per table, so we already have that kind of parallelism.

To work around the current limitations, you can duplicate the destination and connection, then rotate writeKeys on the client side or split traffic using a JavaScript function. Deduplication may still work unreliably in this scenario.

Kind of true. Yet when you have one small table and another massive table, I don't think it's a good idea to treat both of them the same way. Deduplication is understandable. But would having specific keys in the Kafka messages ensure the ordering of the messages, if I understood you correctly?
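For context on that last question: Kafka guarantees ordering only within a partition, and messages that share the same key are routed to the same partition, so keyed messages keep their per-key order even on a multi-partition topic. A hedged sketch with kafkajs follows; the topic name and key choice are illustrative, not how Bulker actually produces messages.

```typescript
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "keyed-producer", brokers: ["kafka:9092"] });

async function sendKeyed(): Promise<void> {
  const producer = kafka.producer();
  await producer.connect();
  try {
    // Messages with the same key hash to the same partition, so Kafka
    // preserves their relative order even with many partitions.
    await producer.send({
      topic: "destination-messages.example", // hypothetical topic name
      messages: [
        { key: "user-123", value: JSON.stringify({ event: "signup" }) },
        { key: "user-123", value: JSON.stringify({ event: "purchase" }) },
      ],
    });
  } finally {
    await producer.disconnect();
  }
}

sendKeyed().catch(console.error);
```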

yalattas · Apr 29 '25 20:04