
Int8 Matmul not supported on gfx1030?

gururise opened this issue on Dec 6, 2022 · 2 comments

Attempting to use this library on a gfx1030 (6800XT) with Hugging Face Transformers fails. The installation check itself passes:

python -m bitsandbytes
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++ DEBUG INFORMATION +++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Running a quick check that:
    + library is importable
    + CUDA function is callable

SUCCESS!
Installation was successful!

Trying to load a simple Hugging Face transformer, however, results in:

=============================================
ERROR: Your GPU does not support Int8 Matmul!
=============================================

python3: /dockerx/temp/bitsandbytes-rocm/csrc/ops.cu:347: int igemmlt(cublasLtHandle_t, int, int, int, const int8_t *, const int8_t *, void *, float *, int, int, int) [FORMATB = 3, DTYPE_OUT = 32, SCALE_ROWS = 0]: Assertion `false' failed.
Aborted (core dumped)

I am using ROCm 5.4.0 (I updated the library paths in the Makefile to point to 5.4).
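
For reference, a minimal reproduction along these lines triggers the abort. This is only a sketch: the checkpoint name is an example, and any model loaded with load_in_8bit=True routes its Linear layers through bitsandbytes' Int8 matmul (igemmlt).

# Minimal sketch of the failing path. The checkpoint name is a placeholder;
# any model loaded with load_in_8bit=True goes through igemmlt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-350m"  # hypothetical example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    load_in_8bit=True,
)

inputs = tokenizer("Hello world", return_tensors="pt").to(model.device)
output = model.generate(**inputs)  # aborts in igemmlt on gfx1030
print(tokenizer.decode(output[0]))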

gururise · Dec 06 '22 20:12

May I ask what your make command was? I haven't been able to get as far as you have with my installation.

Jarfeh · Dec 12 '22 03:12

Hi,

Currently, no released RDNA card supports Int8 matmul; per the original author, it relies on specific tensor core operations. I have explicitly deleted igemmlt, as it is simply not available on the RDNA platform.
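
If someone wants to keep the rest of the library usable, a guard along these lines could make the Python side fall back instead of hitting the missing kernel. This is only a sketch under my assumptions, not this fork's actual logic; it relies on ROCm builds of PyTorch exposing gcnArchName on the device properties.

# Sketch of a capability guard, not this fork's actual code. On ROCm,
# torch.cuda.get_device_properties exposes gcnArchName (e.g. "gfx1030");
# gfx10xx/gfx11xx are RDNA parts without the int8 tensor-core path.
import torch

def int8_matmul_supported(device: int = 0) -> bool:
    props = torch.cuda.get_device_properties(device)
    arch = getattr(props, "gcnArchName", "")
    if arch:  # ROCm build
        return not (arch.startswith("gfx10") or arch.startswith("gfx11"))
    # CUDA build: upstream bitsandbytes needs compute capability >= 7.5
    return torch.cuda.get_device_capability(device) >= (7, 5)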

I have successfully used the AdamW 8-bit optimizer from this fork, but nothing else has been tested.
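
For anyone who just needs the optimizer, this is roughly how I use it. A minimal sketch: the toy model and hyperparameters are placeholders.

# Minimal sketch of the 8-bit AdamW path that works on this fork.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()  # ROCm still shows up as "cuda"
optimizer = bnb.optim.AdamW8bit(model.parameters(), lr=1e-4)

x = torch.randn(8, 1024, device="cuda")
loss = model(x).pow(2).mean()  # dummy objective for the toy model
loss.backward()
optimizer.step()      # optimizer state is kept in 8 bits
optimizer.zero_grad()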

broncotc · Dec 12 '22 03:12