
Results: 9 comments of Mark

According to the Kafka protocol, the consumer should commit its uncommitted offsets before rejoining the group. That way it will not double-process them.

I solved that by handling batches myself. If I get a rebalance error on heartbeat, I stop fetching and wait for the current job to finish (make sure that your session interval is bigger...
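A minimal sketch of that batch-handling approach, assuming kafkajs as the client (the thread doesn't name it); the broker address, topic, group id, and `handleMessage` are placeholders:

```
import { Kafka, KafkaMessage } from 'kafkajs';

// Hypothetical business logic for a single message.
async function handleMessage(message: KafkaMessage): Promise<void> {
  // ...process the message
}

const kafka = new Kafka({ clientId: 'indexer', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({
  groupId: 'indexer-group',
  sessionTimeout: 60000, // keep the session longer than the slowest batch
});

async function run(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topic: 'events' });

  await consumer.run({
    eachBatchAutoResolve: false, // resolve offsets manually, message by message
    eachBatch: async ({
      batch,
      resolveOffset,
      heartbeat,
      commitOffsetsIfNecessary,
      isRunning,
      isStale,
    }) => {
      for (const message of batch.messages) {
        // Stop fetching as soon as a rebalance or shutdown is signalled.
        if (!isRunning() || isStale()) break;
        await handleMessage(message);
        resolveOffset(message.offset); // mark this offset as processed
        await heartbeat(); // throws REBALANCE_IN_PROGRESS during a rebalance
      }
      // Commit the offsets resolved so far, so the next assignee of this
      // partition does not reprocess messages we already finished.
      await commitOffsetsIfNecessary();
    },
  });
}

run().catch(console.error);
```

With this setup a rebalance surfaces inside `eachBatch`, so the loop stops cleanly and only unresolved offsets are handed to the next assignee.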

I use a node:18 container with NestJS as the framework. I can't send you code to reproduce it, because I get this error whenever any unhandled exception occurs in my app.

I added this code in the try/catch where I report exceptions:

```
try {
  const { crypto, msCrypto } = getGlobalObject();
  console.log('global.crypto || global.msCrypto', crypto, msCrypto);
} catch (e) {
  console.log('global.crypto...
```

Here is the result:

```
global.crypto || global.msCrypto { getRandomValues: [Function: getRandomValues] } undefined
randomUUID func undefined
randomUUID crashed TypeError: crypto.randomUUID is not a function
    at KafkaConsumer.onBatch (/home/node/dist/apps/ethereum-indexer-sushiswap/main.js:1:181174)
    at async Runner.processEachBatch...
```

The last UUID is from the Node crypto module.

It seems that the Sentry SDK is not using the Node.js crypto module but a polyfill.
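If that's the case, a minimal workaround sketch is to prefer the built-in `node:crypto` module over whatever happens to be installed on the global object; `safeRandomUUID` is a hypothetical helper name:

```
import { randomUUID as nodeRandomUUID, webcrypto } from 'node:crypto';

// Prefer Node's built-in randomUUID; fall back to the Web Crypto object
// (globalThis.crypto on node:18, or node:crypto's webcrypto export).
function safeRandomUUID(): string {
  if (typeof nodeRandomUUID === 'function') return nodeRandomUUID();
  const cryptoObj = (globalThis as any).crypto ?? webcrypto;
  if (typeof cryptoObj?.randomUUID === 'function') return cryptoObj.randomUUID();
  throw new Error('No randomUUID implementation available');
}
```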

I am using just `node:18`. The weird thing is that if I just launch the container I can see the full crypto module via `global.crypto`, but when I catch an exception in the Kafka consumer...
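One way to confirm this, as a diagnostic sketch: log the shape of the global crypto object at startup and again inside the consumer's catch block, then compare (`describeCrypto` is a hypothetical helper):

```
// Logs which crypto capabilities are visible at a given point in time.
const describeCrypto = (label: string) =>
  console.log(label, {
    crypto: typeof (globalThis as any).crypto,
    randomUUID: typeof (globalThis as any).crypto?.randomUUID,
    getRandomValues: typeof (globalThis as any).crypto?.getRandomValues,
  });

describeCrypto('at startup');          // on a plain node:18 start: all defined
// ...later, inside the Kafka consumer's catch block:
describeCrypto('in consumer catch');   // reportedly only getRandomValues remains
```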

It has a communication interface exposed right below the DC input.