Arraymancer

Sparse tensor and Sparse CudaTensor

Open mratsim opened this issue 8 years ago • 3 comments

Sparse tensor support is important in general machine learning, especially for storing matrices of one-hot encoded vectors.

For the CUDA backend, NVIDIA provides a sparse API (cuSPARSE). For the CPU, further investigation is needed to find a suitable sparse BLAS backend. See this link for potential libraries.
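To illustrate the one-hot use case, here is a minimal sketch using SciPy's CSR format as a stand-in (not Arraymancer code): a label matrix with one non-zero per row shrinks from hundreds of MB dense to a few MB sparse.

```python
# Illustrative sketch only: why one-hot encoded matrices benefit from a sparse layout.
import numpy as np
import scipy.sparse as sp

n_samples, n_classes = 100_000, 1_000
labels = np.random.randint(0, n_classes, size=n_samples)

# Dense one-hot: n_samples * n_classes floats, almost all zero (~400 MB here).
dense = np.zeros((n_samples, n_classes), dtype=np.float32)
dense[np.arange(n_samples), labels] = 1.0

# CSR one-hot: one stored value and column index per row, plus row pointers (a few MB).
sparse = sp.csr_matrix(
    (np.ones(n_samples, dtype=np.float32),  # data
     labels,                                # column indices
     np.arange(n_samples + 1)),             # row pointers
    shape=(n_samples, n_classes),
)

print(dense.nbytes)
print(sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes)
```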

mratsim avatar Sep 09 '17 12:09 mratsim

Progress on the research:

The main challenge is finding an up-to-date Sparse BLAS library.

Survey of the field:

  • http://www.netlib.org/utk/people/JackDongarra/la-sw.html

Updated:

  • MKL
  • SciPy, but not parallel.

Seems state of the art:

  • ViennaCL via its OpenMP backend is much faster than MKL (see their SpMM benchmark). Works on CSR matrices (layout sketched after this list). C++.
  • librsb and its Julia wrapper. Custom matrix format. C/Fortran.
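For reference, a minimal illustration of the CSR layout these libraries operate on and of an SpMM (sparse × dense) product, shown with SciPy rather than ViennaCL or librsb:

```python
# Minimal CSR illustration with SciPy; ViennaCL/librsb expose the same concepts
# through their own C++/C APIs.
import numpy as np
import scipy.sparse as sp

A = sp.csr_matrix(np.array([[1., 0., 2.],
                            [0., 0., 3.],
                            [4., 5., 0.]]))

# CSR stores three flat arrays:
print(A.data)     # [1. 2. 3. 4. 5.]  non-zero values, row by row
print(A.indices)  # [0 2 2 0 1]       column index of each value
print(A.indptr)   # [0 2 3 5]         start of each row in data/indices

# SpMM: sparse (3x3) times dense (3x2) -> dense (3x2)
B = np.ones((3, 2))
print(A @ B)
```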

MPI / Distributed:

Full sparse tensor library:

Seems stalled:

Paper:

mratsim avatar Oct 24 '17 23:10 mratsim

"BlockSparse" optimized GPU kernels by OpenAI:

mratsim avatar Aug 24 '18 12:08 mratsim

The TACO tensor compiler generates efficient dense, sparse, and block sparse kernels for various formats. It is worth checking out.
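For intuition, below is roughly the loop structure a compiler like TACO emits for the index expression `y(i) = A(i,j) * x(j)` over a CSR matrix (TACO generates C; this is only a readable Python sketch).

```python
# Hand-written CSR SpMV, sketching the kind of kernel a tensor compiler generates.
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A @ x for A given as CSR arrays."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows, dtype=data.dtype)
    for i in range(n_rows):                        # iterate over rows
        for k in range(indptr[i], indptr[i + 1]):  # non-zeros of row i
            y[i] += data[k] * x[indices[k]]
    return y

# Tiny check against the dense product
indptr  = np.array([0, 2, 3, 5])
indices = np.array([0, 2, 2, 0, 1])
data    = np.array([1., 2., 3., 4., 5.])
x       = np.array([1., 2., 3.])
print(csr_spmv(indptr, indices, data, x))  # [ 7.  9. 14.]
```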

chenpeizhi avatar Feb 14 '20 17:02 chenpeizhi