I am reading 40 million rows from ClickHouse and writing them into MySQL. After about 35 million rows have been read, an error is reported.
clickhouse-jdbc version: 0.4.6
Client error: java.sql.SQLException: java.io.StreamCorruptedException: Reached end of input stream after reading 6719 of 10487 bytes
ClickHouse version: 23.3.2.1
Server error: Code: 24. DB::Exception: Cannot write to ostream at offset 1082130432: While executing ParallelFormattingOutputFormat. (CANNOT_WRITE_TO_OSTREAM)
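Not a fix for the root cause, but a common way to reduce the blast radius of a mid-stream failure like this is to export in chunks, so that a failed chunk can be retried instead of restarting the whole 40M-row read. A minimal sketch of the chunk-planning logic (hypothetical `ChunkPlanner` helper, not part of clickhouse-jdbc; the actual query and ORDER BY column are up to you):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: plans {offset, limit} pairs so a large export can be
// issued as many smaller "SELECT ... ORDER BY <stable key> LIMIT <limit>
// OFFSET <offset>" queries. If one chunk fails with CANNOT_WRITE_TO_OSTREAM /
// StreamCorruptedException, only that chunk is retried, not the whole stream.
class ChunkPlanner {
    static List<long[]> plan(long totalRows, long chunkSize) {
        List<long[]> chunks = new ArrayList<>();
        for (long offset = 0; offset < totalRows; offset += chunkSize) {
            // The last chunk may be smaller than chunkSize.
            chunks.add(new long[] { offset, Math.min(chunkSize, totalRows - offset) });
        }
        return chunks;
    }
}
```

Note that LIMIT/OFFSET pagination requires a stable ORDER BY and ClickHouse still scans the skipped rows; keyset pagination on a monotonic column is usually cheaper, but the retry idea is the same.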
Good day, @T-M-L-C ! Is the problem reproducible? Is it on the latest version, too?
Thanks in advance!
There is a similar issue on our side, but it is intermittent; it occurs mostly when the client side is under pressure (several concurrent statements).
ClickHouse:
Code: 24. DB::Exception: Cannot write to ostream at offset 7340032: While executing ParallelFormattingOutputFormat. (CANNOT_WRITE_TO_OSTREAM) (version 23.8.14.6 (official build))
Client side:
Error reading from db: java.io.StreamCorruptedException: Reached end of input stream after reading 70390 of 93625 bytes
Any thoughts on what the reason could be? And how can such failures be avoided?
@TimonSP it may be linked to the CH issue https://github.com/ClickHouse/clickhouse-private/issues/8134 What are your client / server versions?
@chernser, sorry, the link does not work for me (Page not found)
ClickHouse version 23.8.14.6 clickhouse-java version 0.4.6
I'm seeing this same issue using: ClickHouse 23.9.1.1854, clickhouse-java version 0.6.3
Similar to @TimonSP, we see this intermittently, but frequently. We are not able to reproduce it on demand, but we could turn on any debug logging that might be helpful. It always occurs when reading large numbers of records from ClickHouse (> 10M). We see this both when using the ClickHouse JDBC driver and when using the native HTTP client.
One guess I had was that this could be related to TCP send/recv buffers. Is this a plausible explanation? How does ClickHouse handle the TCP send buffer being full?
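On the TCP-buffer guess: I can't confirm that is the cause here, but one related client-side knob is clickhouse-jdbc's `socket_timeout` connection property (milliseconds). If the server blocks while writing because the client is not draining the stream fast enough, either side can give up, and the client then sees a truncated stream. A sketch of raising it via the JDBC URL (hypothetical host and database names):

```
jdbc:clickhouse://ch-host:8123/mydb?socket_timeout=300000
```

Whether this helps for this particular failure is a guess; it is mainly worth ruling out before digging into kernel-level send/recv buffer tuning.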
Also, the link to the issue does not work for me either. I think this is because it's in the "clickhouse-private" repo. Can you give some more information about what this issue references?