Error when converting the model
Code executed:

```python
import warnings

import cv2
import torch
from segment_anything import sam_model_registry


def export_embedding_model():
    sam_checkpoint = "D:/Anaconda3/envs/sam_vit_h_4b8939.pth"
    model_type = "vit_h"
    device = "cpu"
    sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
    sam.to(device=device)

    image = cv2.imread('./images/truck.jpg')
    target_length = sam.image_encoder.img_size
    pixel_mean = sam.pixel_mean
    pixel_std = sam.pixel_std
    img_size = sam.image_encoder.img_size
    # pre_processing is the helper provided by this project
    inputs = pre_processing(image, target_length, device, pixel_mean, pixel_std, img_size)

    onnx_model_path = model_type + "_" + "embedding.onnx"
    dummy_inputs = {"images": inputs}
    output_names = ["image_embeddings"]

    image_embeddings = sam.image_encoder(inputs).cpu().numpy()
    print('image_embeddings', image_embeddings.shape)

    with warnings.catch_warnings():
        warnings.filterwarnings("ignore", category=torch.jit.TracerWarning)
        warnings.filterwarnings("ignore", category=UserWarning)
        with open(onnx_model_path, "wb") as f:
            torch.onnx.export(
                sam.image_encoder,
                tuple(dummy_inputs.values()),
                f,
                export_params=True,
                verbose=False,
                opset_version=17,
                do_constant_folding=True,
                input_names=list(dummy_inputs.keys()),
                output_names=output_names,
                # dynamic_axes=dynamic_axes,
            )
```
Error message:

Environment:
Try sam_vit_l. The Windows environment has not been rigorously tested; a Docker environment is recommended. This project provides the conversion principle and code — just delete the pre-processing and post-processing parts and give it a try. The network itself is not very complex. This project is no longer maintained, sorry everyone!
Take a look at the readme.md file — there is a tutorial. Delete the pre-processing and post-processing code and pay attention to the input_dim dimensions; the export flow for any network follows this same approach. Study my approach on the Python side, and the pre-processing / post-processing wrapper code on the C++ side. Contact me if you still have problems. Wishing you success!