Patrick Devine
I'm going to go ahead and close the issue, but we can reopen if it's still causing problems.
@uestcxt can you try again? Are you still seeing the issue?
@R4ZZ3 I'm not sure how that's related to this issue? I'm assuming you're having problems running a bad GGUF file, but this issue is for `ollama pull`. I'm going to...
Dupe of #3471
I'm going to go ahead and close this since there hasn't been any update for a while.
What safetensors model were you trying to import? Right now only Mistral and Mistral fine tunes are supported. More are coming soon though!
Sorry about that! I have Gemma now working, but haven't yet sent out the PR. I'll add an error message saying that the other models aren't yet supported.
@amnweb can you list which models you tried? I just realized there should be code to catch that.
Can you post one of the Modelfiles? I'm trying to figure out whether you converted/quantized these yourself or had Ollama convert the safetensors files.
@amnweb sorry for the slow response! I somehow lost track of this. Unfortunately, I don't believe any of the models you converted will work inside of the llama.cpp runner.