Build failure on Orin AGX

llama-cpp/ggml.h(218): error: identifier "__fp16" is undefined

I'd like to request ExLlama support anyway (it's the best loader right now).
Until the changes from https://github.com/ggerganov/llama.cpp/issues/1455 get merged into ggml, we probably can't do anything here.
Regarding exllama that's something to consider after we implemented https://github.com/rustformers/llm/issues/31
Nice :) Can I build without llama.cpp?
I'm afraid not, sorry - our ggml comes from llama.cpp. We don't currently support any other backends.