Basile Deustua

19 comments by Basile Deustua

Hi, I see that you removed the blocker label. Do you think the 1.0 version won't break once the fix is implemented?

+1 Same here. For an S3 bucket or any DB, it would be amazing to have the choice between creating a resource or importing it if it already exists. For retained resources, when the...

Any news about it? It would be very interesting to have a Burrow module exposing Kafka lag as a timestamp. I think many of us have had to deal with the...

You can find information about exactly-once semantics [here](https://docs.confluent.io/kafka-connect-s3-sink/current/overview.html#exactly-once-delivery-on-top-of-eventual-consistency). It can happen on rebalancing, errors, timeouts, etc. If you follow the rules of exactly-once semantics you can skip the new version (it...
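For reference, the guarantee in that doc hinges on a deterministic setup: a deterministic `timestamp.extractor` (`Record` or `RecordField`) and rotation driven by `rotate.interval.ms` rather than `rotate.schedule.interval.ms`. A minimal sketch of the relevant S3 sink properties (values here are illustrative, not from the original thread):

```properties
# Deterministic partitioning: record-timestamp-based, so a restarted task
# rewrites byte-identical files and S3 overwrites stay idempotent.
partitioner.class=io.confluent.connect.storage.partitioner.TimeBasedPartitioner
timestamp.extractor=Record
partition.duration.ms=3600000
path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
locale=en-US
timezone=UTC
# Deterministic rotation: driven by record timestamps, not wall-clock time
# (rotate.schedule.interval.ms would break the exactly-once guarantee).
rotate.interval.ms=600000
```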

Yes, it's not really explained in the docs, but looking at the code you can see that `flush.size` is compulsory. I don't really know why, maybe to be sure...
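In practice you set it even when you rotate on time: a file is committed once `flush.size` records have accumulated, or earlier if a rotation interval fires first. A minimal sketch (illustrative values):

```properties
# Required by the connector's config validation, even with time-based rotation.
flush.size=10000
# A file can still be committed earlier, when the rotation interval elapses.
rotate.interval.ms=600000
```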

No, you should have some information in beans like `kafka.connect.sink-task-metrics.[connector_name].[task_number]`.

I'm using the Datadog JMX scraper and my bean regex is exactly this:

```yaml
domain: 'kafka.connect'
bean_regex: 'kafka\.connect:type=sink-task-metrics,connector=([-.\w]+),task=([0-9]+)'
attribute:
  sink-record-active-count:
    metric_type: gauge
    alias: kafka.connect.sink_task.sink_record_active_count
  sink-record-read-rate:
    metric_type: gauge
    alias: kafka.connect.sink_task.sink_record_read_rate
  partition-count: ...
```

Hi, I have the same issue with basic JSON data with 2 fields. Is it possible to convert JSON data to Parquet with Kafka Connect S3? I have tried...

You only need to change `format.class` to `io.confluent.connect.s3.format.parquet.ParquetFormat` for a sink connector. The key/value.converter is for the deserialization of the Kafka input data into your connector (https://kafka.apache.org/documentation/#connectconfigs_value.converter); the format...
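To make that concrete, here is a sketch of a full sink config, assuming JSON input with embedded schemas (`ParquetFormat` needs a schema, so plain schemaless JSON won't work; Avro with Schema Registry is the other common option). Topic and bucket names are placeholders:

```properties
connector.class=io.confluent.connect.s3.S3SinkConnector
topics=my-topic
s3.bucket.name=my-bucket
s3.region=eu-west-1
storage.class=io.confluent.connect.s3.storage.S3Storage
# Output serialization: write Parquet files to S3.
format.class=io.confluent.connect.s3.format.parquet.ParquetFormat
# Input deserialization: JSON payloads must carry a schema envelope
# ({"schema": ..., "payload": ...}) for Parquet files to be written.
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=true
flush.size=1000
```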