mshimizu-kx
Hi Matt, This issue was fixed in the latest version (kafkakdb v2.0-alpha). Thanks,
Hi Matt, We are planning to release the alpha version as soon as possible. Currently we are checking open issues, clearing resolved ones, and at the same time checking if...
The partition is ignored at subscribe, as described [here](https://docs.confluent.io/5.3.1/clients/librdkafka/rdkafka_8h.html#a0ebe15e9d0f39ccc84e9686f0fcf46f1). Maybe you can assign a new offset for the consumer with `.kfk.assignOffsets` or `.kafka.assignNewOffsetsToTopicPartition`:

```q
.kafka.subscribe[consumer; topic1];
.kafka.subscribe[consumer; topic2];
.kafka.assignNewOffsetsToTopicPartition[consumer; ; (1#0i)!1#.kafka.OFFSET_END] each...
```
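Since the snippet above is cut off after `each`, here is a hedged sketch of how the projected call could be iterated over the subscribed topics. The topic list passed to `each` is an assumption, not taken from the source:

```q
// Sketch only: the list (topic1;topic2) fed to `each` is assumed,
// because the original snippet is truncated at this point.
.kafka.subscribe[consumer; topic1];
.kafka.subscribe[consumer; topic2];

// (1#0i)!1#.kafka.OFFSET_END is a dictionary mapping partition 0i
// to the OFFSET_END sentinel, i.e. "seek to the end of partition 0".
// The call with an omitted middle argument is a projection, applied
// once per topic by `each`.
.kafka.assignNewOffsetsToTopicPartition[consumer; ; (1#0i)!1#.kafka.OFFSET_END] each (topic1; topic2);
```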
For me this happens regardless of executing `commit[]`. Even launching another `stale_con.q` stops the original process from receiving messages. Is there any necessary configuration on the producer side or the broker?
The issue cannot be reproduced due to another problem. The steps are below:

1. Start a process producing on the topic `test1` with `examples/test_producer.q`
2. Start `stale_con.q`
3. Start `other_cons.q` once...
This is done with a script:

```sh
./kafka-topics.sh --bootstrap-server localhost:9092 --topic test1 --delete
```

Note that if you have consumers up and running, the topic will get auto-created...
`consume_start/stop` is not feasible because the index of a topic is not visible to a consumer unless the topic is created in the same process. So deleting the assignment might be...
Hi Srikar, I added the functionality for `.kafka.publishWithHeaders`. The example code shows how to specify the timestamp:

```q
.kafka.publish[topic; .kafka.PARTITION_UA; "Hello from producer"; ""];
.kafka.publishWithHeaders[producer; .z.p; topic; .kafka.PARTITION_UA; "locusts"; ""; `header1`header2!("firmament";...
```
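Because the snippet above is cut off inside the headers dictionary, a hypothetical completed call might look like the following. The argument order mirrors the truncated snippet; the value for `header2` is invented purely for illustration:

```q
// Hypothetical completion of the truncated call above.
// "placeholder value" for header2 is invented, not from the source.
hdrs: `header1`header2!("firmament"; "placeholder value");

// .z.p supplies the current local timestamp as the message timestamp,
// following the argument order shown in the original snippet.
.kafka.publishWithHeaders[producer; .z.p; topic; .kafka.PARTITION_UA; "locusts"; ""; hdrs];
```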