Jerome
Hi Omer, just wanted to add that the time lag is due to the loops in `bcor.py`. In `finemapper.py` the code requests the correlations (`readcorr([])`) for all the SNPs in...
@omerwe and @jdblischak I found that calling `ldstore2` to read the `.bcor` file and output the LD matrix into a text file, then reading that back via `np.loadtxt()`, takes far...
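For anyone following along, here is a minimal sketch of that workaround. The file names and `ldstore2` arguments are placeholders, since the exact flags depend on the LDstore v2 build; consult `ldstore2 --help` before running this.

```python
import subprocess
import numpy as np

# Convert the .bcor file to a plain-text LD matrix with ldstore2, then read it
# back with np.loadtxt(). File names and flags below are hypothetical.
bcor_file = "region.bcor"      # hypothetical input
text_file = "region.ld.txt"    # hypothetical output

cmd = ["ldstore2", "--bcor", bcor_file, "--matrix", text_file]  # illustrative flags
subprocess.run(cmd, check=True)

# A single vectorized parse of the text matrix avoids the per-SNP Python
# loops in bcor.py.
ld = np.loadtxt(text_file)
print(ld.shape)  # expected: (n_snps, n_snps)
```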
I did some more tweaking with the memory and got a new exception:

```
---------------------------------------------------------------------------
Py4JError                                 Traceback (most recent call last)
Input In [3], in ()
      1 ## Import scores...
```
@ritchie46 I am not necessarily benchmarking. And if it is indeed pyarrow, then polars is getting stuck at some point waiting for a response from the pyarrow library. Unfortunately I am not...
Okay, I can confirm that the issue is with the Rust-native implementation of the parquet reader:

- pandas takes 79.96063661575317 seconds to read the file
- polars takes 50.31999969482422 seconds to...
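For reference, a minimal sketch of how such a comparison might be timed; `data.parquet` is a placeholder path, and the `use_pyarrow=True` call is one way to check whether the slowdown sits in polars' native reader or in pyarrow:

```python
import time
import pandas as pd
import polars as pl

path = "data.parquet"  # hypothetical path to the large file

# Time pandas against polars' Rust-native parquet reader.
t0 = time.time()
df_pd = pd.read_parquet(path)
print(f"pandas:           {time.time() - t0:.2f} s")

t0 = time.time()
df_pl = pl.read_parquet(path)  # default: Rust-native reader
print(f"polars (native):  {time.time() - t0:.2f} s")

t0 = time.time()
df_pa = pl.read_parquet(path, use_pyarrow=True)  # route through pyarrow instead
print(f"polars (pyarrow): {time.time() - t0:.2f} s")
```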
@nameexhaustion Thanks for this. Essentially my parquet file has ~4 million records; 8 columns are long strings, and it is easy to see even with 40,000 records that the performance lags...
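A hypothetical reproduction under those assumptions (the actual string contents and lengths are unknown, so the 500-character value is a stand-in):

```python
import polars as pl

# A frame with 8 long-string columns, mirroring the shape described above.
# 40_000 rows is reportedly enough to see the lag; scale n_rows toward
# 4_000_000 to approximate the full file.
n_rows = 40_000
long_string = "x" * 500  # assumed string length

df = pl.DataFrame({f"col_{i}": [long_string] * n_rows for i in range(8)})
df.write_parquet("repro.parquet")

# Reading this file back exercises the same string-heavy path.
out = pl.read_parquet("repro.parquet")
print(out.shape)
```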
@not522 thanks for looking into it, and yes, SQLAlchemy does support DuckDB, but I got errors in the API call. DuckDB follows the same conventions as SQLite but allows for seamless integration...
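For context, a minimal sketch of the DuckDB-over-SQLAlchemy setup. It assumes the third-party `duckdb-engine` package (`pip install duckdb-engine`), which registers the `duckdb` dialect with SQLAlchemy; the file path is a placeholder.

```python
from sqlalchemy import create_engine, text

engine = create_engine("duckdb:///example.db")  # hypothetical file path

# engine.begin() commits the DDL/DML on exit (SQLAlchemy 2.0 style).
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE IF NOT EXISTS t (id INTEGER, name VARCHAR)"))
    conn.execute(text("INSERT INTO t VALUES (1, 'a')"))

with engine.connect() as conn:
    rows = conn.execute(text("SELECT * FROM t")).fetchall()
print(rows)
```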
+1 on this. Score -> grad: `np.linalg.solve` fails because of the NumPy 2.0 behavior change:

> Changed in version 2.0: The b array is only treated as a shape (M,) column vector if it is exactly...
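A small sketch of what that change means in practice for batched solves like this one; the shapes are illustrative:

```python
import numpy as np

# Under the old rule, a b with b.ndim == a.ndim - 1 was treated as a stack
# of (M,) vectors; since NumPy 2.0, only an exactly 1-D b is treated as a
# vector, and anything else is read as a stack of (M, K) matrices.
A = np.stack([np.eye(2)] * 3)   # shape (3, 2, 2): a stack of 3 systems
b = np.ones((3, 2))             # shape (3, 2)

# Pre-2.0: solved as 3 length-2 vectors -> result (3, 2).
# NumPy >= 2.0: b is read as a single (M, K) = (3, 2) matrix, which does not
# match M = 2, so this raises a shape error.
try:
    x = np.linalg.solve(A, b)
except ValueError as e:
    print("solve failed:", e)

# Portable fix: make the vector interpretation explicit with a trailing axis.
x = np.linalg.solve(A, b[..., None])[..., 0]  # shape (3, 2)
print(x.shape)
```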
@sharedw not really. I am working with binary classification, so the Fisher Information matrix is a scalar, and calculating the natural gradient reduces to grad/metric. This should hold for other...
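A minimal sketch of that scalar special case, with illustrative names: when the Fisher information "matrix" is a single number, the natural gradient F⁻¹·grad collapses to an elementwise division, so `np.linalg.solve` is not needed at all.

```python
import numpy as np

def natural_gradient(grad: np.ndarray, fisher) -> np.ndarray:
    """Natural gradient F^{-1} @ grad, with a fast path for a scalar metric."""
    fisher = np.asarray(fisher)
    if fisher.ndim == 0:                  # scalar metric (e.g. binary case)
        return grad / fisher
    return np.linalg.solve(fisher, grad)  # full-matrix metric

grad = np.array([0.3, -1.2])
print(natural_gradient(grad, 2.0))              # scalar path: grad / 2.0
print(natural_gradient(grad, 2.0 * np.eye(2)))  # matrix path: same result
```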