sungh66
## error log
FATAL ERROR! pool allocator destroyed too early
0x55bcadf83680 still in use
0x55bcabb83480 still in use
0x55bca83a3080 still in use
0x55bcacd83580 still in use...
My loss is down to 0.75 and it is hard to converge (150 epochs). I am using the M3 model; how can I fix this issue? My data is good and I had done some...
**Describe the bug**
The conda environment is OK and I can successfully run inference with a model I trained before, but I cannot run the svcg
Traceback (most recent call last):...
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I am trying to fine-tune the model with P-Tuning v2, using the provided clothing dataset and the 6b-int8 pretrained weights (downloaded locally).
### Expected Behavior
_No response_
### Steps To Reproduce
I used the locally downloaded 6b-int8 weights and overwrote the clone of https://huggingface.co/THUDM/chatglm-6b-int8; cli_demo.py deploys normally,...
I want to train on 1 GPU without using distributed training, so I ran
> python tools/train.py local_configs/topformer/topformer_small_512x512_160k_2x8_ade20k.py 1 --work-dir runs/
and added
```
import sys
sys.path.append("/home/xx/TopFormer-main")
```
to train.py...
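The `sys.path` tweak above can be written a little more defensively. A minimal sketch, assuming the repo root path is a placeholder for wherever TopFormer-main was actually cloned:

```python
# Minimal sketch: making a cloned repo importable when running its
# training entry point directly. REPO_ROOT is a placeholder path;
# adjust it to your own checkout location.
import os
import sys

REPO_ROOT = os.path.expanduser("~/TopFormer-main")  # hypothetical path

# Prepend rather than append so the repo's own modules shadow any
# same-named packages already installed in the environment.
if REPO_ROOT not in sys.path:
    sys.path.insert(0, REPO_ROOT)
```

Prepending matters when the environment already contains an older installed copy of the package, which would otherwise be imported first.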
Thanks for your work. I have pruned my two-stage detection model, Faster R-CNN with a ResNet50+FPN backbone, but while running gen_schema.py I met a pruned-conv issue, because the model...
My problem is that my data contains roughly 1k+ audio sequences; during preprocessing, the process gets killed outright at the Whisper step, and before that the system freezes. Is there a way to solve this, or is it because audio has a strict 30 s limit? Some of my clips are around 35~38 s.
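One common workaround for a hard 30 s window is to split long clips into shorter chunks before preprocessing. A minimal sketch, assuming 16 kHz mono waveforms held as NumPy arrays; the `split_audio` helper and the sample-rate constant are illustrative assumptions, not part of any specific pipeline:

```python
# Hedged sketch: cut audio arrays longer than 30 s into consecutive
# chunks of at most 30 s, so 35-38 s clips are not rejected or
# silently truncated by a 30 s preprocessing window.
import numpy as np

SAMPLE_RATE = 16_000            # assumed 16 kHz mono input
MAX_SECONDS = 30
MAX_SAMPLES = SAMPLE_RATE * MAX_SECONDS

def split_audio(samples: np.ndarray) -> list:
    """Return consecutive slices of a 1-D waveform, each <= 30 s."""
    return [samples[i:i + MAX_SAMPLES]
            for i in range(0, len(samples), MAX_SAMPLES)]

# A 37 s clip becomes one 30 s chunk plus one 7 s remainder.
clip = np.zeros(SAMPLE_RATE * 37, dtype=np.float32)
chunks = split_audio(clip)
```

This does not fix the out-of-memory kill by itself, but shorter chunks also bound the per-item memory the preprocessor needs.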
Traceback (most recent call last):
  File "/home/xx/Download/pytorch_AMC-master/cifar_search.py", line 151, in <module>
    main()
  File "/home/xx/Download/pytorch_AMC-master/cifar_search.py", line 100, in main
    next_state, real_action, done, score = learner.compress(action)
  File "/home/xx/Download/pytorch_AMC-master/learners/channel_pruning.py", line 194, in compress
    next_state,...
I want to add some custom operations, such as local modules for file read and write. Is this possible? Because I think AutoGPT is too flexible. Also, how does the code...