光源殿下
I run training with `python -m torch.distributed.launch --nproc_per_node=4`, using dataset=cityscape, backbone=resnet50, batchsize=4. My GPUs are 4x Nvidia Titan X with 12 GB of memory per card, and I get CUDA out of memory...
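Not part of the original report, but a common mitigation for OOM in this setup is gradient accumulation: lower the per-GPU batch and step the optimizer every few iterations so the effective batch size is unchanged. A minimal sketch of the arithmetic (all names hypothetical):

```python
def effective_batch_size(per_gpu_batch: int, n_gpus: int, accum_steps: int = 1) -> int:
    """Effective (optimizer-level) batch size under data parallelism
    with gradient accumulation."""
    return per_gpu_batch * n_gpus * accum_steps

# Original run: batchsize=4 on 4 GPUs -> effective batch of 16.
baseline = effective_batch_size(4, 4)                # 16
# Halving the per-GPU batch to 2 while accumulating 2 steps keeps the
# effective batch at 16, at roughly half the peak activation memory per card.
reduced = effective_batch_size(2, 4, accum_steps=2)  # 16
assert baseline == reduced == 16
```

In the training loop this means dividing the loss by `accum_steps` and calling `optimizer.step()` only every `accum_steps` iterations; the exact hook point depends on the repo's trainer, which the snippet above does not show.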
I've tried 2 ways to obtain a .pdiparams for C++ inference: (I want a static version of a SAM model, so I first looked for Paddle's export.py, but that file doesn't exist, and I couldn't get the code I added to run either: `net = to_static(model, input_spec=[InputSpec(what goes here? please help), name='batched_input'), InputSpec(shape[None, 1], dtype=paddle.bool, name='multimaskk_output') ])` What should I fill in for the first batched_input InputSpec? (I'm a bit of a novice.)) 1. Export from .pdparams. However, export.py is missing...
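For reference (not from the original thread): `paddle.jit.to_static` takes one `paddle.static.InputSpec` per forward argument, with `None` marking a dynamic dimension. A plain-Python sketch of plausible spec fields for a SAM-style image input, so the shapes are explicit; the 1024x1024 resolution and all names here are assumptions, not confirmed against the user's model:

```python
# Hypothetical InputSpec fields for a SAM-style static export.
# In Paddle each dict below would become:
#   paddle.static.InputSpec(shape=..., dtype=..., name=...)
batched_input_spec = {
    # [batch, channels, height, width]; None = dynamic batch dimension.
    # 1024x1024 is SAM's usual preprocessed image size (assumption).
    "shape": [None, 3, 1024, 1024],
    "dtype": "float32",
    "name": "batched_input",
}
multimask_output_spec = {
    "shape": [None, 1],  # matches the shape in the snippet above
    "dtype": "bool",
    "name": "multimask_output",
}

# The dynamic dims are exactly the entries left as None:
dynamic_dims = [i for i, d in enumerate(batched_input_spec["shape"]) if d is None]
assert dynamic_dims == [0]
```

One caveat: if the model's forward takes a list of dicts rather than plain tensors (as upstream SAM's `batched_input` does), `to_static` cannot spec that directly, and a tensor-only wrapper around the forward is usually needed before a static export can work.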
I have a YOLOv5 .pth model here that I converted to ONNX, then to an .nb file via Paddle-Lite (2.13rc0) and X2Paddle (1.4.1), trying to deploy it on Android, and it throws an error. The command I used is `x2paddle --framework=onnx --model=onnx_model.onnx --save_dir=pd_model --to_lite=True --lite_valid_places=arm --lite_model_type=naive_buffer`. Any idea what the problem is? Also, one more question: with this conversion path, is the input resolution the same as the ONNX model's? What precision does it end up with, and can I set the precision myself?
I use offline mode, and config_file = 'D:/deeplearning/Grounded-Segment-Anything-main/GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py' # change the path of the model config file checkpoint_path = 'D:/deeplearning/Grounded-Segment-Anything-main/bert-base-uncased/groundingdino_swint_ogc.pth' # change the path of the model image_path = 'D:/deeplearning/Grounded-Segment-Anything-main/1.jpg' text_prompt...
After carefully calibrating, I ran the copy process and hit multiple failures. Now, printing the angles, I just found that the main leader arm shows: >>> print(leader_pos) [ -89.82422...
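An aside not in the original post: if the expected rest pose for that joint is 0 degrees, a reading of about -89.8 is almost exactly one quarter-turn off, which usually points at a calibration offset rather than sensor noise. A tiny helper (hypothetical, degrees assumed) to quantify that:

```python
def nearest_quarter_turn(angle_deg: float) -> tuple[float, float]:
    """Return (nearest multiple of 90 degrees, residual error in degrees)."""
    nearest = 90.0 * round(angle_deg / 90.0)
    return nearest, angle_deg - nearest

# -89.82422 is ~0.18 deg away from -90: one clean quarter-turn off from zero.
nearest, residual = nearest_quarter_turn(-89.82422)
assert nearest == -90.0 and abs(residual) < 1.0
```

A residual well under a degree for every joint suggests the encoders are fine and only the zero-offset (or drive direction) chosen during calibration is wrong.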
>>> robot.connect() Connecting main follower arm. Connecting main leader arm. Activating torque on main follower arm. >>> import tqdm >>> seconds = 30 >>> frequency = 200 >>> for _...
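The transcript above loops for `seconds = 30` at `frequency = 200` Hz. A self-contained sketch of that fixed-rate pattern (names hypothetical, not lerobot's API), which sleeps until an absolute deadline so timing drift does not accumulate across iterations:

```python
import time

def run_fixed_rate(step, seconds: float, frequency: float) -> int:
    """Call step() at roughly `frequency` Hz for `seconds` seconds.

    Sleeping until an absolute deadline (start + i * period) avoids
    accumulating drift, unlike sleeping a fixed amount each iteration.
    Returns the number of steps executed.
    """
    period = 1.0 / frequency
    n_steps = int(seconds * frequency)
    start = time.perf_counter()
    for i in range(n_steps):
        step()  # e.g. read leader arm, command follower arm
        remaining = start + (i + 1) * period - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)
    return n_steps

# 30 s at 200 Hz corresponds to 6000 control steps.
assert int(30 * 200) == 6000
```

If `step()` ever takes longer than one period, `remaining` goes negative and the loop simply skips the sleep, so a slow iteration delays but never stalls the schedule.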