Better timeout indication on streaming JSON-RPC requests
Some JSON-RPC requests are streamed and lazily evaluated (eth_getLogs, for example). If the response becomes too large, it is sent in chunks across multiple HTTP responses. If a timeout occurs, the node just abruptly stops sending data.
It would be good to handle timeouts better in those scenarios (HTTP status codes?), but I am not sure if that's possible.
For reference:
I am querying the CirclesUBI Hub contract on the xDai mainnet for all "TRUST" events (contract address: 0x29b9a7fBb8995b2423a71cC17cf9810798F6C543). The code (in Java with web3j) is:
EthFilter eventFilter = new EthFilter(DefaultBlockParameter.valueOf(index), DefaultBlockParameter.valueOf(currentBlock), hub.getContractAddress());
String encodedEventSignature = EventEncoder.encode(Hub.TRUST_EVENT);
eventFilter.addSingleTopic(encodedEventSignature);
and I am going from the deploy block (about 12000000) to LATEST (currently around 19000000) and expect around 1 million events, so "a lot".
Without a high enough timeout, I get back a varying, nondeterministic number of events, with no indication that any are missing.
@ice09 The recommended solution is to split this filter into multiple requests using block number ranges.
For example, during the Merge, when the consensus client syncs the deposit contract, it uses block ranges of 1000.
Yes, thanks, that's what I did (for a locally running node I used a range size of 10000), and it works very well. Still, the behaviour could be more user/developer-friendly.
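For anyone landing here, the range-splitting workaround can be sketched as a small loop. This is a minimal sketch, not the exact code from the thread: the `splitRanges` helper, the 10000 chunk size, and the block numbers are illustrative; the per-chunk `EthFilter`/`ethGetLogs` call is only indicated in a comment.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

public class BlockRangeSplitter {

    // Split the inclusive range [from, to] into chunks of at most chunkSize blocks.
    static List<BigInteger[]> splitRanges(BigInteger from, BigInteger to, BigInteger chunkSize) {
        List<BigInteger[]> ranges = new ArrayList<>();
        BigInteger start = from;
        while (start.compareTo(to) <= 0) {
            BigInteger end = start.add(chunkSize).subtract(BigInteger.ONE).min(to);
            ranges.add(new BigInteger[] { start, end });
            start = end.add(BigInteger.ONE);
        }
        return ranges;
    }

    public static void main(String[] args) {
        // Illustrative numbers: from around the deploy block, in 10000-block chunks.
        List<BigInteger[]> ranges = splitRanges(
                BigInteger.valueOf(12_000_000L),
                BigInteger.valueOf(12_025_000L),
                BigInteger.valueOf(10_000L));
        for (BigInteger[] r : ranges) {
            // For each chunk you would build a new EthFilter(r[0], r[1], contractAddress),
            // add the encoded TRUST event topic, call web3j.ethGetLogs(filter).send(),
            // and accumulate the returned logs.
            System.out.println(r[0] + "-" + r[1]);
        }
    }
}
```

Each request then stays small enough to complete well within the client timeout, so no events are silently dropped.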
We will still look into it.
We will buffer eth_getLogs for now.
Wasn't that already addressed? I recall we did something in that area.
@LukaszRozmej @rjnrohit
Are you referring to this issue: https://github.com/NethermindEth/nethermind/issues/7064 @kamilchodola ?
Yeah, probably. I recall this one @rjnrohit