
[feature] Add AMD GPU support through ROCm

Open NewJerseyStyle opened this issue 2 years ago • 8 comments

Add a CMake flag in CMakeLists.txt, following llama.cpp's approach.

Compile with args:

cmake -B build -DGGML_HIPBLAS=ON -DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang -DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ && cmake --build build -j
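For context, a minimal sketch of what such a flag could look like, modeled on llama.cpp's HIP support. The option name matches the command above, but the ggml target name and the exact definitions are assumptions here, not the patch from the linked branch:

option(GGML_HIPBLAS "ggml: use hipBLAS" OFF)

if (GGML_HIPBLAS)
    # ROCm installs CMake packages for hip, hipblas and rocblas under /opt/rocm
    list(APPEND CMAKE_PREFIX_PATH /opt/rocm)
    find_package(hip REQUIRED)
    find_package(hipblas REQUIRED)
    find_package(rocblas REQUIRED)

    # ggml reuses its CUDA code path for HIP, so both defines are set
    add_compile_definitions(GGML_USE_HIPBLAS GGML_USE_CUBLAS)
    target_link_libraries(ggml PUBLIC hip::device roc::hipblas roc::rocblas)
endif()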

NewJerseyStyle avatar Nov 02 '23 16:11 NewJerseyStyle

Tested on a Vega 56, and ChatGLM3 works well.
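For reference, a smoke test on such a build might look like the following (the model path is an assumption; point it at wherever the converted GGML model lives):

./build/bin/main -m models/chatglm3-ggml.bin -p "Hello"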

Source code

Does it need more testing to verify?

⚠️ I later figured out the test was polluted by another build with OPENBLAS in the same environment, and I actually ran into issues with the ROCm build...

I have solved the problems now... Except:

/workspace/chatglm.cpp.hip/third_party/ggml/src/ggml-cuda.cu:1577:38: error: 'x' is a protected member of '__half2'
    reinterpret_cast<half&>(y[ib].ds.x) = d;
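On ROCm, __half2 exposes x and y as protected members, so the member-wise write in ggml-cuda.cu fails to compile under hipcc. A hedged sketch of one possible workaround, assuming y[ib].ds is a half2 and that d (and a companion sum, called s here) are floats as in the surrounding kernel, is to assemble the half2 in a single step:

// build both halves at once instead of writing .x / .y individually
y[ib].ds = __halves2half2(__float2half(d), __float2half(s));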

NewJerseyStyle avatar Nov 02 '23 16:11 NewJerseyStyle

Excuse me, did you compile under Windows or Linux? My Windows CMake build is failing. Did you get any results?

lld-link: error: could not open 'm.lib': no such file or directory
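For context, lld-link failing to open m.lib usually means the build links the Unix math library m unconditionally; there is no separate m library on Windows because those functions live in the C runtime. A minimal sketch of a platform guard (the target name chatglm is an assumption):

if (NOT WIN32)
    # only link libm on non-Windows platforms
    target_link_libraries(chatglm PRIVATE m)
endif()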

yanite avatar Nov 10 '23 15:11 yanite

I built with ROCm on Ubuntu, using the Docker image rocm/dev-ubuntu-18.04:4.2-complete.

Based on your description, I believe it is caused by the environment paths in the CMake file... I hardcoded the library paths for my Docker image.

Actually, I have found some problems with my CMake file. When building chatglm.cpp, the linker raises errors saying it cannot find references to functions in ggml.cpp. This has confused me for a few days.

NewJerseyStyle avatar Nov 11 '23 02:11 NewJerseyStyle

Ubuntu 22.04, ROCm 5.7.1, 6800 XT. The cmake build reports: CMake Warning: Manually-specified variables were not used by the project: GGML_HIPBLAS

llama.cpp builds fine:

-- hip::amdhip64 is SHARED_LIBRARY
-- Performing Test HIP_CLANG_SUPPORTS_PARALLEL_JOBS
-- Performing Test HIP_CLANG_SUPPORTS_PARALLEL_JOBS - Success
-- hip::amdhip64 is SHARED_LIBRARY
-- HIP and hipBLAS found
AMD LLD 17.0.0 (compatible with GNU linkers)
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring done (0.9s)
-- Generating done (0.0s)
-- Build files have been written to:

CellerX avatar Nov 18 '23 23:11 CellerX

Ubuntu 22.04, ROCm 5.7.1, 6800 XT... CMake Warning: Manually-specified variables were not used by the project: GGML_HIPBLAS

Are you using the CMakeLists.txt from here?

NewJerseyStyle avatar Nov 22 '23 02:11 NewJerseyStyle

I finally figured out what is wrong with my script on my side...

The only error I encounter is:

...
/workspace/chatglm.cpp.hip/third_party/ggml/src/ggml-cuda.cu:12:10: fatal error: 'hipblas/hipblas.h' file not found
...

I tried everything I could in the CMakeLists.txt, but nothing got me past this.

It turned out to be a problem with my Docker environment rocm/dev-ubuntu-20.04:4.2-complete: its hipBLAS include path does not contain a hipblas folder, i.e. I had to make ggml-cuda.cu include hipBLAS with #include <hipblas.h> instead of #include <hipblas/hipblas.h>.
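A more portable sketch that handles both header layouts without hand-editing the include (this relies on __has_include, which the clang-based ROCm compiler supports; it is not the change from the linked CMakeLists.txt):

// prefer the namespaced header, fall back to the flat one on older ROCm images
#if __has_include(<hipblas/hipblas.h>)
#include <hipblas/hipblas.h>
#else
#include <hipblas.h>
#endif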

NewJerseyStyle avatar Nov 23 '23 00:11 NewJerseyStyle

@CellerX I have checked and patched the CMakeLists.txt as much as I can. Can you verify whether the problem still occurs?

CMakeList.txt

NewJerseyStyle avatar Nov 23 '23 01:11 NewJerseyStyle

Thanks, I will try again.

CellerX avatar Nov 25 '23 00:11 CellerX