Reason Duan

Results: 4 issues by Reason Duan

```
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
	at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:609)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:182)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:231)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused...
```

In Cluster mode, Scan on the query page fails to retrieve any data.

```
List redisMasterNodeList = getRedisMasterNodeList(cluster);
int masterSize = redisMasterNodeList.size();
int count = masterSize < 10 ? 100 / masterSize : 10;
autoCommandParam.setCount(count);
Set result = new LinkedHashSet();
redisMasterNodeList.forEach(masterNode ->...
```
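A possible explanation (an assumption on my part, not confirmed in the issue): Redis SCAN's COUNT is only a hint, so a single SCAN call per master node may legitimately return zero keys even when keys exist, and the cursor must be followed until it returns 0. The sketch below reproduces the issue's per-node count formula and shows a full cursor loop over a simulated keyspace; `perNodeCount`, `scanStep`, and `scanAll` are hypothetical helper names, and the scan is simulated rather than hitting a real Redis cluster.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ClusterScanSketch {
    // Mirrors the issue's logic: spread a budget of ~100 keys across
    // the cluster's master nodes; at least 10 per node.
    static int perNodeCount(int masterSize) {
        return masterSize < 10 ? 100 / masterSize : 10;
    }

    // Simulated single SCAN step over an in-memory keyspace.
    // COUNT is only a hint in real Redis, so one step may return fewer
    // keys than requested; a returned cursor of 0 means iteration is done.
    static Object[] scanStep(List<String> keys, int cursor, int count) {
        int end = Math.min(cursor + count, keys.size());
        List<String> batch = new ArrayList<>(keys.subList(cursor, end));
        int next = (end >= keys.size()) ? 0 : end;
        return new Object[]{next, batch};
    }

    // Follow the cursor until it returns 0, instead of issuing a single
    // SCAN per node and assuming one batch holds all matching keys.
    @SuppressWarnings("unchecked")
    static Set<String> scanAll(List<String> keys, int count) {
        Set<String> result = new LinkedHashSet<>();
        int cursor = 0;
        do {
            Object[] step = scanStep(keys, cursor, count);
            cursor = (Integer) step[0];
            result.addAll((List<String>) step[1]);
        } while (cursor != 0);
        return result;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 37; i++) keys.add("key:" + i);
        int count = perNodeCount(3); // 3 masters -> 100/3 = 33 per step
        Set<String> all = scanAll(keys, count);
        System.out.println(count + " " + all.size()); // prints "33 37"
    }
}
```

With this loop, all 37 simulated keys are collected across two cursor steps; stopping after the first SCAN call would be the kind of behavior the issue describes.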

bug

Is it possible to migrate data from 3.0.6 to 5.0.5?

**Routine checks**

[//]: # (Remove the space inside the brackets and fill in an x)

+ [ ] I have confirmed that there is no similar existing issue
+ [ ] I have confirmed that I have upgraded to the latest version
+ [ ] I have read the project README in full and confirmed that the current version cannot meet my needs
+ [ ] I understand and am willing to follow up on this issue, helping with testing and providing feedback
+ [ ] I understand and accept the above, and I understand that the maintainers have limited time; **issues that do not follow the rules...**

enhancement