Edisonwei54

Results: 9 issues opened by Edisonwei54

With mkl-fft==1.3.0, mkl-random==1.1.1, and mkl-service==2.3.0 an error occurs, while with mkl-fft==1.3.6, mkl-random==1.2.2, and mkl-service==2.4.0 the run can finish
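A minimal way to capture the combination reported to work (assuming pip with a pinned requirements file; the version numbers are taken from the issue title above):

```text
# requirements.txt — versions reported to finish successfully
mkl-fft==1.3.6
mkl-random==1.2.2
mkl-service==2.4.0
```

Installing with `pip install -r requirements.txt` should then avoid the failing 1.3.0/1.1.1/2.3.0 combination.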

**Describe the bug** When I use this component of LocalAI to call an embedding model deployed on another server through the API, the vector computation succeeds, but when...

bug

### Your current environment ```text Collecting environment information... PyTorch version: 2.1.2+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.4...

bug

**Describe the bug** What the bug is and how to reproduce it, preferably with screenshots. Quantizing the fine-tuned llama3-8B runs out of memory even with 4×24 GB of GPU memory? I have already reduced quant_n_samples and quant_seqlen to 32/128. CUDA_VISIBLE_DEVICES=0,1,2,3 swift export \ --model_type llama3-8b-instruct \ --ckpt_dir /home/greatwall/app/edison/output/llama3-8b-instruct/v2-20240427-073919/checkpoint-438-merged \ --quant_bits 4 \ --quant_method awq \...

question

```text
loading model: /home/ps/app/edison/TigerBot/TigerResearch/tigerbot-70b-chat-4bit-exl2...
Traceback (most recent call last):
  File "/home/ps/app/edison/TigerBot/other_infer/exllamav2_hf_infer.py", line 300, in
    fire.Fire(main)
  File "/home/ps/anaconda3/envs/FastChat/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/ps/anaconda3/envs/FastChat/lib/python3.10/site-packages/fire/core.py", line...
```

I followed the tutorial exactly, but when I finally try to run model inference, both the command line and the web demo report this error

### Your current environment ```text PyTorch version: 2.3.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.3 LTS (x86_64) GCC...

bug

### Feature Idea When I use ComfyUI to build a workflow and upload an image, a new image is generated successfully once the workflow completes. I hope that when...

Feature

Are there any examples of instance-segmentation training, testing, and inference, or a ready-made model I could use for instance-segmentation inference? I would like to use them as a reference