
Reading 40 million rows from ClickHouse and writing them into MySQL: an error is reported after about 35 million rows have been read.

Open T-M-L-C opened this issue 1 year ago • 17 comments

clickhouse-jdbc version: 0.4.6
Client error message: java.sql.SQLException: java.io.StreamCorruptedException: Reached end of input stream after reading 6719 of 10487 bytes

ClickHouse version: 23.3.2.1
Server error message: Code: 24. DB::Exception: Cannot write to ostream at offset 1082130432: While executing ParallelFormattingOutputFormat. (CANNOT_WRITE_TO_OSTREAM)

T-M-L-C avatar Feb 23 '24 13:02 T-M-L-C

[screenshot attached]

T-M-L-C avatar Feb 23 '24 14:02 T-M-L-C

Good day, @T-M-L-C ! Is the problem reproducible? Is it on the latest version, too?

Thanks in advance!

chernser avatar Jun 18 '24 19:06 chernser

I am seeing a similar issue, but it is intermittent; it occurs mostly when the client side is under pressure (several concurrent statements).

ClickHouse:

Code: 24. DB::Exception: Cannot write to ostream at offset 7340032: While executing ParallelFormattingOutputFormat. (CANNOT_WRITE_TO_OSTREAM) (version 23.8.14.6 (official build))

Client side:

Error reading from db: java.io.StreamCorruptedException: Reached end of input stream after reading 70390 of 93625 bytes

Any thoughts on what the reason could be? And how can such failures be avoided?
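One generic mitigation (a sketch, not something suggested in this thread) is to retry the whole read when the response stream breaks mid-transfer. This assumes the query is idempotent and that partial results from a failed attempt are discarded:

```java
import java.io.StreamCorruptedException;
import java.util.concurrent.Callable;

public class RetryingReader {
    // Re-run the whole read when the response stream breaks mid-transfer.
    // Safe only if the query is idempotent and partial results are discarded.
    public static <T> T readWithRetry(Callable<T> read, int maxAttempts) throws Exception {
        StreamCorruptedException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return read.call();
            } catch (StreamCorruptedException e) {
                last = e; // stream died mid-read; retry from the beginning
            }
        }
        throw last;
    }
}
```

For a 40M-row copy job a full restart per failure is expensive, so this pairs best with splitting the read into smaller windows so only the failed window is retried.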

TimonSP avatar Jul 23 '24 10:07 TimonSP

@TimonSP it may be linked to the CH issue https://github.com/ClickHouse/clickhouse-private/issues/8134. What are your client/server versions?

chernser avatar Jul 23 '24 17:07 chernser

@chernser, sorry, the link does not work for me (Page not found).

ClickHouse version: 23.8.14.6
clickhouse-java version: 0.4.6

TimonSP avatar Jul 23 '24 18:07 TimonSP

I'm seeing this same issue using: ClickHouse 23.9.1.1854, clickhouse-java 0.6.3.

Similar to @TimonSP, we see this intermittently, but frequently. We are not able to reproduce it on demand, but we could turn on any debug logging that might be helpful. It always occurs when reading large numbers of records from ClickHouse (> 10M). We see this both when using the ClickHouse JDBC driver and when using the native HTTP client.
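As a workaround for very large single reads, one option (a sketch; the table and the `id` ordering column are hypothetical, not from this thread) is to split one huge SELECT into bounded LIMIT/OFFSET windows, so each response stream stays small and a broken stream only loses one window:

```java
import java.util.ArrayList;
import java.util.List;

public class WindowedQuery {
    // Split a read of totalRows into LIMIT/OFFSET windows of windowSize rows.
    // Each window becomes its own query, so a failed stream only retries one window.
    public static List<String> windows(String baseSelect, long totalRows, long windowSize) {
        List<String> queries = new ArrayList<>();
        for (long offset = 0; offset < totalRows; offset += windowSize) {
            long limit = Math.min(windowSize, totalRows - offset);
            // A stable ORDER BY key is required for deterministic paging (assumed column: "id")
            queries.add(baseSelect + " ORDER BY id LIMIT " + limit + " OFFSET " + offset);
        }
        return queries;
    }
}
```

Note that deep OFFSET paging gets progressively more expensive server-side; keyset pagination (`WHERE id > lastSeenId LIMIT n`) is usually cheaper if a monotonic key is available.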

One guess I had was that this could be related to TCP send/recv buffers. Is this a plausible explanation? How does ClickHouse handle the TCP send buffer being full?
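For context on the TCP-buffer guess: at the Java level, the receive buffer can be requested and inspected on a raw socket. This is the plain `java.net` API, not a clickhouse-jdbc setting (whether the driver exposes an equivalent option is version-dependent); the OS may round or cap the requested value, so it should be read back rather than assumed:

```java
import java.net.Socket;
import java.net.SocketException;

public class SocketBufferProbe {
    // Request a larger TCP receive buffer (SO_RCVBUF). The OS may adjust the
    // value, so read it back rather than assuming the request was honored.
    public static int requestReceiveBuffer(int requestedBytes) throws SocketException {
        try (Socket socket = new Socket()) {          // unconnected socket; no network needed
            socket.setReceiveBufferSize(requestedBytes);
            return socket.getReceiveBufferSize();     // actual size granted by the OS
        } catch (java.io.IOException e) {
            throw new SocketException(e.getMessage());
        }
    }
}
```

A too-small receive buffer would cause backpressure (the server blocking on its send), not corruption by itself, so buffer tuning alone is unlikely to fully explain the `CANNOT_WRITE_TO_OSTREAM` error.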

Also, the link to the issue does not work for me either. I think this is because it's in the "clickhouse-private" repo. Can you give some more information about what this issue references?

wallacms avatar Aug 21 '24 22:08 wallacms