scalar27
I've been using llama.cpp for quite a while (M1 Mac). Is there a way I can get ai_voicetalk_local.py to point to that installation instead of reinstalling it here? Sorry, newbie...
Please consider making a version that uses llama.cpp's server instead of ollama. This would give better flexibility in terms of trying new models and settings. Thanks.
What would be involved in getting some of these apps, e.g., chat with pdf, to work with llama.cpp instead of ollama? I'm a big fan of llama.cpp (on a Mac)...
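For the llama.cpp-instead-of-ollama questions above: llama.cpp ships a server (`llama-server`) that exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so scripts that talk to ollama can often be repointed by changing the request URL. A minimal sketch, assuming a locally running `llama-server` (the address, port, and `chat` helper below are illustrative, not from any of the repos mentioned):

```python
import json
import urllib.request

# Assumption: llama-server is running locally, started with something like:
#   llama-server -m model.gguf --port 8080
# It serves an OpenAI-compatible chat completions API under /v1.
BASE_URL = "http://127.0.0.1:8080"  # hypothetical local address

def build_chat_request(prompt, model="local"):
    """Build an OpenAI-style chat completion payload for llama-server."""
    return {
        # With a single loaded model, llama-server does not need a real model name.
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt):
    """Send one prompt to the local llama-server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

In an app that currently targets ollama, the change is usually just the base URL and payload shape; how cleanly that works depends on how the app wraps its backend.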
Newbie here, I'm trying to use this cool repo: https://github.com/PkmX/orpheus-chat-webui that uses fastrtc. I'm on a Mac M1 Max. It works fine when I have a wifi connection but fails...
I'm trying to run Chatterbox on an M1 Mac. I'm getting this error: gradio.exceptions.Error: "Error: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are...
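That deserialize error typically means the checkpoint was saved on a CUDA machine, and `torch.load` is trying to restore the tensors to their original (CUDA) device. On an M1 Mac the usual workaround is to pass `map_location` at load time. A sketch, assuming the fix is applied wherever the app calls `torch.load` (the path and helper names below are placeholders, not Chatterbox's actual code):

```python
def pick_device(cuda_available, mps_available):
    """Map availability flags to a torch device string."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"  # Apple Silicon GPU backend
    return "cpu"

def load_checkpoint(path):
    """Load a checkpoint, remapping CUDA-saved tensors to this machine's device."""
    # Hypothetical helper: Chatterbox's own loading code would need the same
    # map_location treatment at its torch.load call site.
    import torch
    device = pick_device(torch.cuda.is_available(),
                         torch.backends.mps.is_available())
    return torch.load(path, map_location=device)
```

If the `torch.load` call is buried inside the library, some people monkey-patch it before importing the model code; either way, `map_location` is what silences the "Attempting to deserialize object on a CUDA device" error.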
Trying to generate using VibeVoice for the first time, the terminal shows: Loading processor & model from microsoft/VibeVoice-Large Fetching 10 files: 0%| | 0/10 [00:00