Micla-SHL
```
line:   42,31,88,31,135,31,182,31,182,60,135,60,88,60,42,60,####杰森柔声说
line2:  42,31,88,31,135,31,182,31,182,60,135,60,88,60,42,60,####杰森柔声说
cors:   ['42', '31', '88', '31', '135', '31', '182', '31', '182', '60', '135', '60', '88', '60', '42', '60']
points: [42.0, 31.0, 88.0, 31.0, 135.0, 31.0, 182.0,...
```
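For reference, a minimal sketch of how an annotation line in this format (comma-separated coordinates, then `####` and the transcription) can be split; the `parse_annotation_line` helper is hypothetical and not the project's actual loader:

```python
# Hypothetical helper, not the AdelaiDet loader: split an ABCNet-style
# annotation line "x1,y1,...,xn,yn,####transcription" into coordinate
# floats and the transcription text.
def parse_annotation_line(line: str):
    coords_part, _, transcription = line.partition("####")
    cors = [c for c in coords_part.split(",") if c.strip()]
    points = [float(c) for c in cors]
    return points, transcription

points, text = parse_annotation_line(
    "42,31,88,31,135,31,182,31,182,60,135,60,88,60,42,60,####杰森柔声说"
)
print(points[:4], text)  # [42.0, 31.0, 88.0, 31.0] 杰森柔声说
```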
```
Traceback (most recent call last):
  File "demo/demo.py", line 90, in <module>
    predictions, visualized_output = demo.run_on_image(img)
  File "/Micla/Project/AdelaiDet/demo/predictor.py", line 77, in run_on_image
    traced_script_model = torch.jit.trace(predictions, self.predictor(image))
  File "/Micla/Program/Anaconda3/envs/ABCNet/lib/python3.8/site-packages/torch/jit/_trace.py", line 785, in trace
    name...
```
I trained an ABCNet v2 Chinese recognition model, and training works. When testing the saved model, this problem appears:

```
python tools/train_net.py --config-file /Micla/Project/AdelaiDet/output/batext/rects/v2_attn_R_50/config.yaml --num-gpus 1 --eval-only MODEL.WEIGHTS /Micla/Project/AdelaiDet/output/batext/rects/v2_attn_R_50/model_0029999.pth MODEL.BATEXT.EVAL_TYPE 3
```

My training environment is Python 3.8 with PyTorch 1.10. In the training config I changed `NORM: "SyncBN"` to `"BN"` because I only have a single GPU, and set batch_size to 1; nothing else was changed. The training data is the ReCTS dataset from the original config. For the error in the title, Google suggests changing `self.CTLABELS = pickle.load(fp)` ==> `self.CTLABELS...`
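From the traceback above, `torch.jit.trace` appears to be called with the prediction outputs as its first argument, but `torch.jit.trace(func, example_inputs)` expects the callable first and the example inputs second. A minimal sketch of the expected argument order, using a placeholder torchvision model rather than the actual predictor internals:

```python
import torch
import torchvision

# Placeholder model and input; the real code would trace the predictor's
# underlying nn.Module with a representative image tensor instead.
model = torchvision.models.resnet18().eval()
example_input = torch.rand(1, 3, 224, 224)

# torch.jit.trace(callable, example_inputs): the module comes first,
# the sample input second -- the reverse of the call in the traceback.
traced = torch.jit.trace(model, example_input)
traced.save("traced_model.pt")
```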
作者你好,我是了解到ABCNet后查找到最新的SwinTextSpotter项目,我觉得它应该是比ABCNet更优秀,您在SwinTextSpotter提到更新的ESTextSpotter,我在昨天尝试之后,能 执行 vis.py (需在for循环末尾添加: torch.cuda.empty_cache(),) 我的GPU都是独立的,3060,3070,4090。单张GPU资源能否支持这两个项目之一,我的资源太少了,没有8张。我即使GPU跑一张图显存也不够(在3060实验ESTextSpotter,还未测试4090,)。这两项目对资源的需求能否再下降? 这是可行的吗? 如果我对网络不管精度先剪枝确保能训练这是能被推荐的吗? 还是我这边添加设备会更好些?
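For reference, a minimal sketch of the kind of memory-conscious inference loop described above, combining `torch.no_grad()` with a per-iteration `torch.cuda.empty_cache()`; the `model` and `images` names are placeholders, not the actual vis.py code:

```python
import torch

def run_inference(model, images):
    """Run per-image inference while keeping GPU memory pressure low."""
    results = []
    model.eval()
    with torch.no_grad():  # no autograd buffers are retained
        for img in images:
            out = model(img.cuda())
            # Move results off the GPU so they don't accumulate in VRAM
            # (assumes a dict-of-tensors output; adjust for the real model).
            results.append({k: v.cpu() for k, v in out.items()})
            # Release cached blocks back to the driver between images,
            # as the issue above does at the end of the loop body.
            torch.cuda.empty_cache()
    return results
```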
The detectron2 [Quantization](https://github.com/blueardour/detectron2) link returns a 404. Has the link been updated?
### Reminder

- [X] I have read the README and searched the existing issues.

### Reproduction

```
CUDA_VISIBLE_DEVICES=0 USE_MODELSCOPE_HUB=1 python src/evaluate.py --model_name_or_path LLM-Research/Meta-Llama-3-8B-Instruct --template llama3 --finetuning_type lora --task ceval --split validation...
```
### Required prerequisites

- [X] I have read the documentation.
- [X] I have searched the [Issue Tracker](https://github.com/baichuan-inc/baichuan-7B/issues) and [Discussions](https://github.com/baichuan-inc/baichuan-7B/discussions) that this hasn't already been reported. (+1 or comment...
On ModelScope I only see the 34B weights, which is a bit large. I would like to get the model from ModelScope: even though downloading it is a one-time cost, the bandwidth consumed on Hugging Face nodes is still considerable and I'd like to save some. BAAI's own community download channel is rarely used; it would be ideal if Hugging Face and ModelScope between them covered all the models.
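For reference, a minimal sketch of pulling a model from ModelScope instead of Hugging Face, using the model ID from the reproduction command above; whether a given model is actually listed on ModelScope is an assumption to verify:

```python
# Requires: pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download

# Model ID taken from the evaluation command above; substitute the
# actual ModelScope listing for the model you want.
model_dir = snapshot_download("LLM-Research/Meta-Llama-3-8B-Instruct")
print("Model downloaded to:", model_dir)
```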