pinghe

Results: 15 comments by pinghe

Hi, I have the same problem as you. Have you solved it? Since I have changed the `robot` and the position of objects such as the `peg` in `PegInHole`, I'm not sure...

> > hello, when I install vision from source, I got this error:
> > ```shell
> > /home/gai_test/wuyang/vision/torchvision/csrc/ops/nms.cpp:22:5: error: ‘class torch::Library’ has no member named ‘set_python_module’
> >    22 |...
> > ```

> We would still recommend deploying directly on a bare-metal Windows machine.

Hello, I deployed InternVL2-8B locally on bare-metal Windows and ran the sample code:

```
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig, GenerationConfig
from lmdeploy.vl import load_image
model = 'D:\\xxxx\\InternVL2-8B'
system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。'
chat_template_config = ChatTemplateConfig('internvl-internlm2')
chat_template_config.meta_instruction = system_prompt
pipe = pipeline(model,...
```
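The demo above is cut off at the `pipeline(...)` call. For reference, here is a minimal self-contained sketch of how such an lmdeploy vision-language demo typically continues; the `session_len`, image URL, and generation settings are illustrative assumptions, not taken from the original comment.

```python
# Minimal sketch of a full lmdeploy VLM demo; the image URL and generation
# settings below are placeholders (assumptions), not from the original post.
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig, GenerationConfig
from lmdeploy.vl import load_image

model = 'D:\\xxxx\\InternVL2-8B'
system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。'
chat_template_config = ChatTemplateConfig('internvl-internlm2')
chat_template_config.meta_instruction = system_prompt

# Build the pipeline with the TurboMind backend and the custom chat template.
pipe = pipeline(model,
                backend_config=TurbomindEngineConfig(session_len=8192),
                chat_template_config=chat_template_config)

# Run a single image + text query.
image = load_image('https://example.com/demo.jpg')  # placeholder image URL
response = pipe(('describe this image', image),
                gen_config=GenerationConfig(max_new_tokens=512))
print(response.text)
```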

> When you create the pipeline, add the parameter log_level="INFO", then run the demo again and paste the detailed log here.

Switching to the quantized version solved it.
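As a quick illustration of both suggestions, here is a sketch of creating the pipeline with verbose logging and a quantized checkpoint; the AWQ model path and the `model_format` value are assumptions for illustration only.

```python
# Sketch only: verbose logging plus a quantized (AWQ) model.
# The local path and model_format below are illustrative assumptions.
from lmdeploy import pipeline, TurbomindEngineConfig

pipe = pipeline('D:\\xxxx\\InternVL2-8B-AWQ',
                backend_config=TurbomindEngineConfig(model_format='awq'),
                log_level='INFO')
```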

> Just comment out lines 29~35 and don't use `try_to_load_from_cache`; write the `FONT_PATH` yourself:
>
> ```python
> FONT_PATH = "xxx/Qwen-VL-Chat/SimSun.ttf"
> # if FONT_PATH is None:
> ...
> ```
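Since the quoted snippet is truncated, here is a rough sketch of what that edit to `tokenization_qwen.py` could look like; the commented-out part is a reconstruction under assumptions, not the verbatim contents of lines 29~35.

```python
# Sketch of the suggested patch: hard-code the local font path and skip the
# cache/network lookup. The commented-out block is an assumed reconstruction.
FONT_PATH = "xxx/Qwen-VL-Chat/SimSun.ttf"  # local SimSun.ttf shipped with the model

# if FONT_PATH is None:
#     ...  # original fallback that went through try_to_load_from_cache / a web download
```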

> > > Just comment out lines 29~35 and don't use `try_to_load_from_cache`; write the `FONT_PATH` yourself:
> > >
> > > ```python
> > > FONT_PATH = "xxx/Qwen-VL-Chat/SimSun.ttf"
> > > ```
> >
> > ...

> You can compare with tokenization_qwen.py in qwen-vl-chat-int4; the int4 version doesn't contain the lines 29-35 you mentioned, so my suggestion is to just comment them all out?

I am using the int4 version, running in a Windows environment, and the code is:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
# Set a random seed here if you want reproducible results.
# torch.manual_seed(1234)
tokenizer = AutoTokenizer.from_pretrained("D:\\xxx\\Qwen-VL-Chat-int4", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("D:\\xxx\\Qwen-VL-Chat-int4", device_map="cuda", trust_remote_code=True).eval()
query =...
```
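The snippet above stops at `query =`; for reference, here is a self-contained sketch of the usual Qwen-VL-Chat continuation. The paths, image URL, and prompt text are placeholders/assumptions, not from the original comment.

```python
# Self-contained sketch of the standard Qwen-VL-Chat(-Int4) chat flow.
# Paths, image URL and prompt text are placeholders (assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("D:\\xxx\\Qwen-VL-Chat-int4", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("D:\\xxx\\Qwen-VL-Chat-int4",
                                             device_map="cuda",
                                             trust_remote_code=True).eval()

# Build a multimodal query and run one round of chat.
query = tokenizer.from_list_format([
    {'image': 'https://example.com/demo.jpeg'},  # placeholder image
    {'text': 'What is in this picture?'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```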

> Yeah, the bottom line is that on a server with no external network access you just need to avoid the requests call. That font should only be used to write label text onto the boxed regions of the image, and since the font file is already in the model directory there is no real need to fetch it from the internet.

I don't know where it generated this script and automatically made the external network request.

> Huh, then it still failed to load, because the `QWenTokenizer` class is defined exactly in the `tokenization_qwen.py` you modified. Qwen shouldn't have any problem loading a local tokenizer. Is everything present in your local model directory: `qwen.tiktoken`, `tokenization_qwen.py`, `tokenizer_config.json` and so on? This is getting a bit mysterious Orz

It works now! You mentioned the tokenization_qwen.py under the model directory; I hadn't realized it was the one in that folder. I had been editing the one under the C: drive all along. Sorry about that, and thanks a lot!