Dre Dyson
> sudo chown -R ollama:ollama

Thanks, this helped a lot. Since I created a different folder, I had to give ollama permissions to the new folder by: `sudo chown -R ollama:ollama...`
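The permission fix described above can be sketched as follows (the directory path is a hypothetical example; the original comment's path is truncated, so substitute wherever you actually created the model folder):

```shell
# Real fix (needs root): give the ollama service user ownership of the
# custom model directory so the daemon can read and write it.
#   sudo chown -R ollama:ollama /path/to/your/models
#
# Demonstrated here without root, using a throwaway directory and the
# current user in place of ollama:ollama:
dir="$(mktemp -d)/models"
mkdir -p "$dir"
chown -R "$(id -un):$(id -gn)" "$dir"
stat -c '%U' "$dir"    # prints the owning user, confirming the change applied
```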
> I just made enough code changes to run the 7B model on the CPU. That involved
>
> * Replacing `torch.cuda.HalfTensor` with `torch.BFloat16Tensor`
> * Deleting every line of...
Gotcha. So all we would run is `python3 llama/generation.py --max_gen_len 1`?
> Nice! This works. Thank you!
It was DeepSeek Chat.
I also have a V100 and I'm getting this error too.
Confirmed working. Thanks!

> By the way, to get Torch 2.4, simply run `wget -qO- https://raw.githubusercontent.com/unslothai/unsloth/main/unsloth/_auto_install.py | python -` to get the optimal installation command
> Using Torch 2.4.0 did not solve the issue. Complete installation:
>
> ```shell
> conda create --name unsloth_240 python=3.10 pytorch=2.4.0 pytorch-cuda=12.1 \
>     cudatoolkit xformers -c pytorch -c nvidia...
> ```