LIU Man
I trained a quantized transformer model in a CPU environment and ran inference in a CPU environment. During training I added --quantization-config-path to fairseq-train. But the inference speed on CPU is **3...
Hi, here is an issue: if I use the original parameters, the performance is quite low. After I increased the number of epochs to 15, the F1 on CONLL03 is only...
I ran this system with the default configuration, but the results stay at "EPOCH:143 F:0.111 P:0.111 R:0.111 ACC:1.000 LOSS:3.237". Even after running 2000 epochs, F, P and R remain no...
When I try to run inference on the finetuned model with vLLM, I get this error. I have already saved the Unsloth finetuned model in HF format. vLLM==0.4.0+cu118, unsloth==2024.5, transformers==4.40.2