i.sinister
Quite often I find myself manually restarting the server with 'OmniSharpRestartServer' when changes are made outside of the current buffer. Without doing this, autocompletion and code checks do not work. Examples are:...
I'm having performance issues when sending a large amount of data 'computed in C#' to ClickHouse. A similar problem was described in #60 and #61. The current version of ClickHouseConnection.PostStreamAsync expects a Stream which...
The current 'bulk copy' API expects an IEnumerable as input. For performance reasons this API does not suit my needs: I have to insert a large number of "small" objects which...
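The bulk-copy issue above is about feeding a huge stream of small objects into an insert API without materializing them all at once. A language-neutral sketch of the batching idea in Python (the actual library is .NET; `batched` and the batch size here are illustrative, not part of any real API):

```python
from itertools import islice

def batched(source, batch_size):
    """Yield successive lists of up to batch_size items from any iterable,
    so a huge stream of small objects is never held in memory at once."""
    it = iter(source)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Example: 10 items in batches of 4 produce batch sizes 4, 4, 2.
sizes = [len(b) for b in batched(range(10), 4)]
```

Each batch can then be handed to the insert call, keeping peak memory proportional to the batch size rather than to the whole dataset.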
Is it possible to add support for external data for queries? According to the [clickhouse documentation](https://clickhouse.tech/docs/en/single/index.html#external-data-for-query-processing) the HTTP API supports it: ``` $ cat /etc/passwd | sed 's/:/\t/g' > passwd.tsv $...
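Per the linked ClickHouse docs, the HTTP interface accepts an external table as a multipart file upload named after the table, plus a `<name>_structure` query parameter describing its columns. A minimal sketch of building such a request URL in Python, assuming the default HTTP port 8123 and the `passwd` example from the docs (only the URL is constructed here; no request is sent):

```python
from urllib.parse import urlencode

# External-data query against a hypothetical local server: the TSV file would
# be uploaded as a multipart form field named 'passwd', and the column layout
# is declared via the 'passwd_structure' query parameter.
params = {
    "query": "SELECT shell, count() AS c FROM passwd GROUP BY shell ORDER BY c DESC",
    "passwd_structure": "login String, unused String, uid UInt16, gid UInt16, "
                        "comment String, home String, shell String",
}
url = "http://localhost:8123/?" + urlencode(params)
```

The file itself would then be attached to a POST of this URL as multipart form data.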
Currently sqlite3_stmt.db is declared as internal; however, using sqlite3_stmt requires the sqlite3 instance to be available for error code checking after sqlite3_step, sqlite3_bind_XXX or sqlite3_reset calls. So in the application sqlite3...
### Issue description I have to save a large dataset with many groups, each having 20M rows, and currently it takes too much time. I think it would improve overall...
Without regular expressions, multiple instances of the generated parser can be used concurrently. When regexps are used, they are declared as 'static' but are in fact mutable objects, and _ParseRegexp...
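The concurrency hazard described above is the classic pattern of mutable state held in a shared (static) member instead of per parser instance. A minimal language-neutral sketch of the safe shape in Python (the real parser is generated .NET code; `Parser`, `_NUMBER`, and `_pos` are illustrative names):

```python
import re

class Parser:
    # Compiled once and shared: safe here only because Python's compiled
    # patterns are immutable. Any *mutable* parsing state must not be shared.
    _NUMBER = re.compile(r"\d+")

    def __init__(self):
        # Mutable state lives on the instance, so two parsers can run
        # concurrently without trampling each other's position.
        self._pos = 0

    def parse_number(self, text):
        m = self._NUMBER.match(text, self._pos)
        if m is None:
            return None
        self._pos = m.end()
        return int(m.group())

p1, p2 = Parser(), Parser()
a = p1.parse_number("42abc")  # advances only p1's position
b = p2.parse_number("7xy")    # p2 is unaffected by p1
```

The fix the issue suggests amounts to the same split: keep immutable compiled patterns shared, move anything mutable into the instance.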
Feature request: extend the DataColumn API to read column values directly into a provided Span/Memory/Array
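The request above is for a read path that fills a caller-provided buffer instead of allocating a new array per call. A sketch of the idea in Python using the stdlib `readinto` pattern (illustrative only; the actual proposal targets .NET Span/Memory):

```python
import array
import io

# Source data: 8 little-endian ints serialized to bytes.
raw = array.array("i", range(8)).tobytes()
stream = io.BytesIO(raw)

# Caller-owned, reusable buffer: the read fills it in place, so a tight loop
# over many columns/pages performs zero per-call allocations.
dest = array.array("i", [0] * 8)
n = stream.readinto(memoryview(dest).cast("B"))
```

The same buffer can be handed to successive reads, which is exactly what a Span/Memory overload of DataColumn would enable.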
### Issue description I have a use case where I need to read rather large parquet files - 5Gb-50Gb, 100 to 10000 groups with 1_000_000-20_000_000 rows in a group. Group sizes...
I have to read large parquet files with 500M-1B records where one of the columns is a datetime. By convention, parquet timestamps are saved as Int96 UTC timestamps. When...
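For context on the Int96 convention mentioned above: the customary layout (as written by Impala/Spark) is 12 little-endian bytes, the first 8 holding nanoseconds since midnight and the last 4 a Julian day number. A hedged decoding sketch in Python (the issue concerns a .NET parquet reader; this only illustrates the format):

```python
import struct
from datetime import datetime, timedelta, timezone

JULIAN_UNIX_EPOCH = 2440588  # Julian day number of 1970-01-01

def int96_to_datetime(raw: bytes) -> datetime:
    """Decode a 12-byte Int96 timestamp: 8 LE bytes of nanoseconds since
    midnight followed by 4 LE bytes of Julian day number."""
    nanos, julian_day = struct.unpack("<qi", raw)
    days = julian_day - JULIAN_UNIX_EPOCH
    return (datetime(1970, 1, 1, tzinfo=timezone.utc)
            + timedelta(days=days, microseconds=nanos // 1000))

# The Unix epoch itself encodes as nanos=0, julian_day=2440588.
raw = struct.pack("<qi", 0, 2440588)
```

Note the truncation to microseconds: Python's datetime cannot carry the full nanosecond precision, which is one reason bulk decoding of 500M+ such values benefits from a vectorized path.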
It looks like when [executable](https://clickhouse.com/docs/en/engines/table-functions/executable) is used, ClickHouse always uses a single thread to prepare the dataset. In my case the data set has ~500M and requires sorting by 3 columns that...