
Let ChatGPT teach your own chatbot in hours with a single GPU!

Results 37 baize-chatbot issues

CUDA SETUP: Detected CUDA version 113
CUDA SETUP: Loading binary /opt/conda/envs/py38/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda113.so...
Downloading and preparing dataset json/default to /root/.cache/huggingface/datasets/json/default-e59c3670f1657ac9/0.0.0/fe5dd6ea2639a6df622901539cb550cf8797e5a6b2dd7af1cf934bed8e233e6e...
Downloading data files: 100%|██████████| 1/1 [00:00

If it is running on the server, how do I get it to run on a specified GPU?
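One common way to pin a process to a specific GPU is to set `CUDA_VISIBLE_DEVICES` before any CUDA library loads. This is a general CUDA/PyTorch convention, not something this repo documents, so treat it as a sketch:

```python
import os

# Restrict this process to physical GPU 1; inside the process it will
# appear as device 0. This must be set BEFORE torch (or any other CUDA
# library) is imported, otherwise the setting is ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# import torch      # imported only after the variable is set
# model.cuda()      # now lands on physical GPU 1
```

Equivalently, from the shell: `CUDA_VISIBLE_DEVICES=1 python demo/app.py`.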

I have tried the 8-bit option as well, but no change. It generates tokens slowly and CPU usage goes high (>80%). GPU usage jumps up too, but always stays < 20%. So it...

**Hello, when I run demo/app.py with the 7B model, I get the error `"addmm_impl_cpu_" not implemented for 'Half'`. Could you please tell me how to fix it?** This share link expires...
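The `"addmm_impl_cpu_" not implemented for 'Half'` error usually means fp16 weights ended up on the CPU, where PyTorch has no half-precision matmul kernels. A minimal sketch of the usual workaround (the `pick_dtype` helper is hypothetical, not part of this repo):

```python
# Hypothetical helper: fp16 matmul kernels only exist on CUDA devices,
# so fall back to float32 whenever the model runs on the CPU.
def pick_dtype(device: str) -> str:
    return "float16" if device.startswith("cuda") else "float32"

# In PyTorch terms, the two equivalent fixes are:
#   model.half().cuda()  # keep fp16, but move the model to the GPU, or
#   model.float()        # stay on the CPU, but cast weights to float32
```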

I collected some Chinese data about "中国云南" (Yunnan, China) like this: ![0417-2](https://user-images.githubusercontent.com/52442277/232364095-2bf77e7b-f850-46ba-ae5f-5d9777404b1c.png) and trained following the README, based on [Baize-7B](https://huggingface.co/project-baize/baize-lora-7B); it took 48 hours and finally produced checkpoints. When I use these checkpoints to...

It has been reported that the converted weights of decapoda-research/llama-7b-hf are not compatible with the current transformers library; please specify an exact transformers version.
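Until the maintainers state the exact version, a common way to avoid silent weight-format incompatibilities is to pin the library explicitly. The version below is a placeholder, not a confirmed value:

```shell
# Placeholder pin; substitute the version the maintainers confirm.
pip install "transformers==<VERSION>"
```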

Hi, do you have any plans to release a GPTQ 4-bit quantized version of your models? That would cut VRAM usage and improve inference speed a lot, without much...

enhancement

I can run the 7B without issue, but loading the 13B I get the following error. The error comes up as soon as the first message is sent. `Traceback (most recent call...

I used this repo to finetune bloomz-7b1-mt with Alpaca data (50k conversations) and the results are terrible. It takes 8 hours to train with the same arguments as in how...

_At first I wrote the same thing in the [Hugging Face community](https://huggingface.co/spaces/project-baize/Baize-7B/discussions/2), and then I realized that I should write on GitHub so that it would be easier for you...