The_Tech_Philosopher
The model has already been created, and support for it has been implemented (merged) into llama.cpp. How can it be used here? https://huggingface.co/andrewcanis/c4ai-command-r-v01-GGUF https://github.com/ggerganov/llama.cpp/pull/6033
@phymbert Don't know if it's useful, but it's already up on Hugging Face: https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1 (You'll find many uploads.)
It just works. =D https://huggingface.co/MaziyarPanahi/Mixtral-8x22B-v0.1-GGUF/tree/main
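For anyone wondering how to actually run one of these GGUF uploads, a minimal sketch (assuming a local llama.cpp build; the exact `.gguf` file name is illustrative, check the repo's file list for the real quant names):

```shell
# Download one quantized file from the repo (file name is a placeholder,
# pick an actual quant from the repo's "Files" tab)
huggingface-cli download MaziyarPanahi/Mixtral-8x22B-v0.1-GGUF \
  Mixtral-8x22B-v0.1.Q4_K_M.gguf --local-dir .

# Run it with the llama.cpp CLI binary:
# -m = model path, -p = prompt, -n = number of tokens to generate
./main -m Mixtral-8x22B-v0.1.Q4_K_M.gguf -p "Hello, world." -n 128
```

Note that 8x22B is large even at 4-bit, so make sure you have enough RAM/VRAM for whichever quant you pick.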
Vision support in particular would be worth it, but I lack the knowledge to do something like this.
Is it natively supported once someone converts it to gguf?
Abetlen has already converted it and is working on an experimental branch: https://huggingface.co/abetlen/Phi-3.5-vision-instruct-gguf
Could you also upload the Q8 version? ✌🏻💫