wikeeyang
The command-line window returns the following:

```
Traceback (most recent call last):
  File "C:\Python\Python311\Lib\site-packages\gradio\routes.py", line 395, in run_predict
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1191, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File...
```
**Describe the bug**

ChatALL runs normally, but no matter what state it is in, once I click the clear-chat-history button the input box can no longer accept any input.

**To Reproduce**

See the screen recording below. Few applications were open: only CMD (launching the model), Google Chrome (the Gradio web page), the screen recorder, and ChatALL — nothing else of note.

https://github.com/sunner/ChatALL/assets/131220899/727dfd1e-dc29-4ac4-82d1-1b20660eac09

**Expected behavior**

I have not been able to analyze or even guess the cause yet. Could it be a conflict with some system process, or with the Sogou input method?

**Desktop (please complete the following information):**

Windows 11 x64, 32 GB RAM, Python 3.10.7.
Windows 11 x64, Python 3.10.11 + torch 2.0.2 + cu11.8.

```
Running on local URL: http://127.0.0.1:8888
To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version...
```
### System Info / 系統信息

Environment: Windows 11 x64, Python 3.11.9, CUDA 12.1. The key dependencies — Torch, torchvision, xformers, transformers, chainlit — were installed exactly per the official requirements.txt. Later, following runtime prompts, I additionally installed einops 0.8.0, triton 2.1.0, accelerate 0.30.1, and psutil 5.9.8.

System environment variables:

```
CUDA_HOME=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1
CUDA_VISIBLE_DEVICES=0
```

To load the model with 4-bit quantization, I modified the model-loading parameters in the web_demo.py script as follows. Original script: from transformers import AutoModelForCausalLM, AutoTokenizer,...
Hi, I ran the source code you released yesterday; the environment and results are described below.

ENV: Windows 11 x64, Python 3.10, Torch 2.1.0, CUDA 11.8, VS 2019
SD Model: dreamlike-anime-1.0, as you recommended

I did not modify the demo.json configuration file. The five dialogues did not all complete: it errored out at dialogue 5, turn 4 — error message below.

Partial result images: [images attached]

Could you help analyze the cause? Is it an environment problem, the CUDA version, CUDA precision, the model, the parameters, or an output-stability issue?
Please remove the reverse proxies ...
The ComfyUI core version is the same — probably the Mar 06 build. The node works in my Windows env, but in the Ubuntu env it doesn't display the "Upload keywords"...
Hi, I used optimum-quanto v0.2.7 to quantize the https://huggingface.co/tencent/HunyuanImage-3.0 model and save it to a safetensors file. The command is `quantize(model, weights=qint4)`, following the method at https://github.com/huggingface/optimum-quanto#quantization-workflow-for-vanilla-pytorch-models-low-level-api. I can reload it successfully by...