luoruijie

Results: 12 comments by luoruijie

> From the error it looks like your environment is a V100 machine, and your CUDA version is probably not 11.2 (please confirm with nvcc -V rather than nvidia-smi). Your problem is most likely caused by the CUDA version not supporting bf16. Sorry, our recent changes were not fully tested with BF16; we will fix this soon.

Piggybacking on this thread: guys, could the testers please cover the other features too, e.g. the code under model_zoo/ernie_3.0? We have already paid for Baidu compute, so please step it up a bit.

I also hit this problem. After I converted the mp3 to wav and loaded the wav file with torchaudio, it worked.
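
A minimal sketch of that workaround, assuming pydub (with ffmpeg on the PATH) handles the conversion; the file names here are placeholders:

```python
from pydub import AudioSegment
import torchaudio

# Convert mp3 -> wav first (pydub shells out to ffmpeg under the hood).
AudioSegment.from_mp3("input.mp3").export("input.wav", format="wav")

# Then torchaudio can load the wav without issue.
waveform, sample_rate = torchaudio.load("input.wav")  # tensor [channels, frames]
print(waveform.shape, sample_rate)
```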

But when I ran the "fine-tune Mistral 7B on imdb" code, it raised ValueError: **adalomo is not a valid OptimizerNames,** please select one of ['adamw_hf', 'adamw_torch', 'adamw_torch_fused', 'adamw_torch_xla', 'adamw_torch_npu_fused', 'adamw_apex_fused', 'adafactor', 'adamw_anyprecision', 'sgd', 'adagrad', 'adamw_bnb_8bit', 'adamw_8bit', 'lion_8bit', 'lion_32bit',...
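
This error suggests the installed transformers release does not yet include adalomo. A quick way to check exactly which optimizer strings your install accepts is to print the OptimizerNames enum:

```python
from transformers.training_args import OptimizerNames

# List every optimizer name this transformers install will accept;
# if "adalomo" is missing, upgrading transformers should add it.
print([opt.value for opt in OptimizerNames])
```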

I met the same error. **Package versions:** bitsandbytes 0.43.1, transformers 4.40.0, torch 2.2.2+cu118, torchaudio 2.2.2+cu118, torchvision 0.17.2+cu118. My steps are as follows: **first, I used the quantization code to quantize...
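
For context, a minimal sketch of the kind of bitsandbytes quantized load being described; since the comment is truncated, the model id and 4-bit settings below are assumptions, not the original code:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Hypothetical 4-bit config; the comment's exact settings are not shown.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
```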

On my end, upgrading the Paddle version to the latest release solved it. How did the rest of you solve it?

I ran into it too, but after I upgraded the PaddlePaddle version, the error stopped appearing.

When I add --quantization fp8, it doesn't work. Errors below: ![Image](https://github.com/user-attachments/assets/a2b24704-e91f-4a42-9cdc-4d4dd2bf60c0)

Yes, when I deploy neuralmagic/DeepSeek-R1-Distill-Llama-70B-FP8-dynamic, it works without the "--quantization FP8" parameter! But when I deploy qwen2.5-72B-instruct, it works with the "--quantization FP8" parameter.
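
For reference, a sketch of that distinction using vLLM's offline API (the Qwen model id is an assumption; the Neural Magic checkpoint is already FP8-quantized, so it loads without the flag):

```python
from vllm import LLM, SamplingParams

# Unquantized checkpoint: ask vLLM to quantize it to fp8 in flight.
llm = LLM(model="Qwen/Qwen2.5-72B-Instruct", quantization="fp8")

# Pre-quantized checkpoint: load as-is, no quantization argument needed.
# llm = LLM(model="neuralmagic/DeepSeek-R1-Distill-Llama-70B-FP8-dynamic")

outputs = llm.generate(["Hello"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```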