mdawid

Results: 5 comments by mdawid

@davidliudev, I think it's still possible to do logical reasoning with the current model size. This is what I get with the 4-bit model from here: https://huggingface.co/eachadea/ggml-vicuna-13b-4bit using llama.cpp. ``` >...

Here's what my local 4-bit model printed: ``` > If my Bluetooth earphone is broken, shall I see otologist or dentist? If your Bluetooth earphone is broken, you should seek...

@mrsipan Very nice output. Could you share your llama.cpp parameters?

@Holpak @KiraCoding Windows error 0xc000001d occurs when the operating system detects an illegal or invalid instruction in a program. The Python package is compiled from source with the default llama.cpp flags LLAMA_AVX:...
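One quick way to confirm this diagnosis is to check whether the CPU actually advertises the instruction sets the package was compiled for. The sketch below assumes a Linux-style `/proc/cpuinfo`; on Windows the same information comes from tools like CPU-Z or `coreinfo`:

```python
def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags advertised by the kernel (Linux only)."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                # The "flags" line lists features as space-separated tokens.
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
# If these print False, a build compiled with AVX/AVX2 enabled will die with an
# illegal-instruction error (0xc000001d on Windows) on this machine.
print("avx:", "avx" in flags, "avx2:", "avx2" in flags)
```

If the flags are missing, the commonly suggested workaround is to rebuild the package with those instruction sets disabled (for llama-cpp-python, via `CMAKE_ARGS` such as `-DLLAMA_AVX=off -DLLAMA_AVX2=off`; the exact flag names depend on the llama.cpp version, so treat them as an assumption and check the build docs).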

@Koalamanx @treyg It works fine for me when I use the previous default model in the configuration, namely 'gpt-3.5-turbo-1106'. It's possible that the new default model (gpt-4o-mini) takes the system prompt...
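To rule out the model change, one can pin the model back explicitly in the request. The sketch below just builds a request body following the OpenAI chat-completions schema; the prompt strings are placeholders, not the project's actual configuration:

```python
# Hypothetical request body pinning the previous default model. The system and
# user messages here are illustrative placeholders only.
payload = {
    "model": "gpt-3.5-turbo-1106",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ],
}
print(payload["model"])  # -> gpt-3.5-turbo-1106
```

If behavior differs only when the model field is switched to gpt-4o-mini, that points at the model's handling of the system message rather than at the surrounding code.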