Jieyang Chen

11 comments of Jieyang Chen

@wangwu1991 Thank you for creating this issue. Would you be able to provide us with the input data and parameters you used for compression?

@wangwu1991 Sorry about the late reply. We have fixed the problem in #203 and here is an example output of compressing "testfloat_8_8_128.dat". Also, please note that the dimensions should be...
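As a side note to the dimension question, a quick way to check the numbers is to compare the file size against the product of the specified dimensions. This is a minimal sketch, not part of MGARD-X; the float element type and the `8, 8, 128` order are assumptions taken from the file name and should be adjusted as needed.

```cpp
// Sketch: verify that the dimensions passed to the compressor are consistent
// with the raw file size. Assumes single-precision floats (file name hint).
#include <cstddef>
#include <cstdio>
#include <fstream>
#include <vector>

int main() {
  const char *path = "testfloat_8_8_128.dat";   // example input
  std::vector<std::size_t> dims = {8, 8, 128};  // assumed order; adjust as needed

  std::ifstream in(path, std::ios::binary | std::ios::ate);
  if (!in) { std::fprintf(stderr, "failed to open %s\n", path); return 1; }
  const std::size_t file_bytes = static_cast<std::size_t>(in.tellg());

  std::size_t expected = sizeof(float);
  for (std::size_t d : dims) expected *= d;     // 8 * 8 * 128 * 4 = 32768 bytes

  if (file_bytes != expected) {
    std::fprintf(stderr, "dimension mismatch: file has %zu bytes, dims imply %zu\n",
                 file_bytes, expected);
    return 1;
  }
  std::printf("dimensions consistent with file size (%zu bytes)\n", file_bytes);
  return 0;
}
```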

@hengjiew Sorry about the late reply. `128*128*16*8 bytes` (~2 MB) is a small dataset, which makes it hard to fully saturate the GPU and achieve high throughput. Usually, you will need...
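For context on why the input size matters here, a back-of-the-envelope sketch shows how a fixed launch/setup cost dominates the end-to-end throughput of a ~2 MB input. The 1 ms overhead and 100 GB/s kernel rate below are assumed figures for illustration only, not measurements of MGARD-X.

```cpp
// Sketch: why a ~2 MB input cannot show peak GPU throughput.
// The input size comes from the comment above; the overhead and kernel rate
// are assumptions used purely for illustration.
#include <cstdio>

int main() {
  const double bytes = 128.0 * 128.0 * 16.0 * 8.0;  // 262,144 doubles = 2 MiB
  const double fixed_overhead_s = 1e-3;             // assumed launch/setup cost (1 ms)
  const double kernel_rate_B_per_s = 100e9;         // assumed sustained kernel rate (100 GB/s)

  const double kernel_time_s = bytes / kernel_rate_B_per_s;
  const double total_time_s = fixed_overhead_s + kernel_time_s;

  std::printf("kernel time: %.3g s, total time: %.3g s\n", kernel_time_s, total_time_s);
  std::printf("observed throughput: %.3g GB/s (vs. %.0f GB/s kernel rate)\n",
              bytes / total_time_s / 1e9, kernel_rate_B_per_s / 1e9);
  return 0;
}
```

With these assumed numbers the observed end-to-end throughput lands around 2 GB/s even though the kernels themselves run at 100 GB/s, which is why a larger input is needed to see the sustained rate.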

@hengjiew Besides the compressed data itself, the returned data buffer also stores the information needed to decompress the data. In the GPU parallel implementation, that information can be as large as...
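To illustrate the effect, a fixed per-buffer metadata cost noticeably shrinks the effective compression ratio of a small input. The payload ratio and metadata size below are assumptions for illustration, not the actual MGARD-X header layout.

```cpp
// Sketch: a fixed metadata cost matters a lot for small inputs.
// The 4:1 payload reduction and 512 KiB metadata are assumed values.
#include <cstdio>

int main() {
  const double input_bytes = 128.0 * 128.0 * 16.0 * 8.0;  // ~2 MiB input
  const double payload_bytes = 0.25 * input_bytes;        // assumed 4:1 payload reduction
  const double metadata_bytes = 512.0 * 1024.0;           // assumed per-buffer metadata

  const double out_bytes = payload_bytes + metadata_bytes;
  std::printf("output: %.0f bytes, effective ratio: %.2f : 1\n",
              out_bytes, input_bytes / out_bytes);        // 2.0 : 1 instead of 4 : 1
  return 0;
}
```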

@ben-e-whitney Ok, let me take a look at this commit first and figure out the possible solutions. I will let you know.

@ben-e-whitney I managed to fix this issue by calling the new compress/decompress API in `include/mgard-x/Lossless/CPU.hpp`. No major changes are necessary on your side, but one minor change is needed. Since...
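For reference, a rough sketch of the kind of change described: keep the old CPU entry points but forward them to the newer compress/decompress API internally. The `new_api::compress`/`decompress` and wrapper names are hypothetical placeholders, not the actual interface in `include/mgard-x/Lossless/CPU.hpp` or the new MGARD lossless API.

```cpp
// Hypothetical sketch only: names are placeholders, and the "new API" here is
// stubbed out with byte copies so the example is self-contained.
#include <cstddef>
#include <cstdint>
#include <vector>

namespace new_api {
// Stand-ins for the new lossless compress/decompress calls (they just copy).
inline std::vector<std::uint8_t> compress(const std::uint8_t *p, std::size_t n) {
  return std::vector<std::uint8_t>(p, p + n);
}
inline std::vector<std::uint8_t> decompress(const std::uint8_t *p, std::size_t n) {
  return std::vector<std::uint8_t>(p, p + n);
}
} // namespace new_api

// Old-style entry points preserved so existing callers need no major changes.
inline std::vector<std::uint8_t> cpu_lossless_compress(const std::uint8_t *p, std::size_t n) {
  return new_api::compress(p, n);
}
inline std::vector<std::uint8_t> cpu_lossless_decompress(const std::uint8_t *p, std::size_t n) {
  return new_api::decompress(p, n);
}
```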

@ben-e-whitney I noticed that the new lossless compress/decompress implementation has a lower throughput compared with the old implementation. Here are my results with the same data (134 million quantized elements)...
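The throughput numbers can be measured along these lines: time the lossless compress call and divide the input bytes by the elapsed seconds. This is a minimal sketch; `compress_quantized` is a stand-in for the routine under test (here it just copies), and the `int32_t` element type is an assumption.

```cpp
// Sketch of a throughput measurement around a lossless compression call.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Stand-in for the lossless compressor under test (copies the bytes).
static std::vector<std::uint8_t> compress_quantized(const std::vector<std::int32_t> &q) {
  std::vector<std::uint8_t> out(q.size() * sizeof(std::int32_t));
  std::memcpy(out.data(), q.data(), out.size());
  return out;
}

int main() {
  // ~134 million quantized elements, as in the comparison above (~536 MB).
  std::vector<std::int32_t> quantized(134 * 1000 * 1000, 0);

  const auto t0 = std::chrono::steady_clock::now();
  const auto compressed = compress_quantized(quantized);
  const auto t1 = std::chrono::steady_clock::now();

  const double seconds = std::chrono::duration<double>(t1 - t0).count();
  const double gb = quantized.size() * sizeof(std::int32_t) / 1e9;
  std::printf("compressed %zu bytes in %.3f s -> %.2f GB/s\n",
              compressed.size(), seconds, gb / seconds);
  return 0;
}
```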

@ben-e-whitney Thanks for the reply. > Yes, I think that's fine. I'd even put all of `CartesianProduct` and `CartesianProduct::iterator` in the `#ifndef __NVCC__`–`#endif` block. I think that might make any...
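For clarity, the guard being discussed looks like the sketch below: nvcc defines `__NVCC__` when it compiles a translation unit, so code inside `#ifndef __NVCC__` is hidden from it. The nearly empty class body is illustrative only; the real `CartesianProduct` lives in the MGARD headers.

```cpp
// Sketch: hide host-only template code from nvcc via the __NVCC__ macro.
#ifndef __NVCC__

#include <cstddef>

template <typename T, std::size_t N> class CartesianProduct {
public:
  class iterator {
    // host-only iterator machinery would go here
  };
  // ...
};

#endif // __NVCC__
```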

@dingwentao Hi Dingwen, thanks a lot! I will check out hipSZ.

@emeryberger Hi, thanks for creating CSrankings! My department highly values the ranking it provides, especially in the HPC area, which is where my research focuses. However, my pull request...