OroChippw
> When running inference with the decoder ONNX model, I sometimes get an error in session.Run and sometimes get a 0x0-size mask output, but the Python version (onnxruntime-gpu 1.14.1) works fine. The error is as follows:...
> > > > Why did I get '(1, 4, 1200, 1800)' after running 'masks.shape' instead of (1, 1, 1200, 1800)? Because when you convert the torch model to an ONNX model...
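For anyone checking the same thing from the C++ side: the four candidate masks appear directly in the output tensor's shape, so the `4` is the mask-channel count rather than a bug. Below is a minimal sketch of inspecting that shape with the ONNX Runtime C++ API; the name `mask_output` is illustrative, not from this repository's code:

```cpp
// Sketch: print the decoder's mask output shape with the ONNX Runtime
// C++ API; `mask_output` is assumed to be the mask Ort::Value returned
// by session.Run.
#include <onnxruntime_cxx_api.h>
#include <iostream>
#include <vector>

void PrintMaskShape(Ort::Value& mask_output) {
    Ort::TensorTypeAndShapeInfo info = mask_output.GetTensorTypeAndShapeInfo();
    std::vector<int64_t> shape = info.GetShape();  // e.g. {1, 4, 1200, 1800}
    std::cout << "mask output dims:";
    for (int64_t d : shape) std::cout << ' ' << d;
    std::cout << '\n';  // the 4 here is the number of candidate masks
}
```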
I met the same problem when using the inference demo you provide:
```python
from fastsam import FastSAM, FastSAMPrompt

model = FastSAM('./weights/FastSAM.pt')
IMAGE_PATH = './images/dogs.jpg'
DEVICE = 'cpu'
everything_results = ...
```
> ```python
> boxes[:, 0] = torch.where(boxes[:, 0] < threshold, torch.tensor(0, dtype=torch.float, device=boxes.device), boxes[:, 0])  # x1
> boxes[:, 1] = torch.where(boxes[:, 1] < threshold, torch.tensor(0, dtype=torch.float, device=boxes.device), boxes[:, 1])  # y1
> boxes[:,...
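For anyone doing this post-processing on the C++ side, the same border-snapping idea can be written with plain OpenCV. This is a sketch under the assumption that boxes arrive as `cv::Rect2f` in pixel coordinates; `ClampBoxesToBorder`, `threshold`, `imgWidth`, and `imgHeight` are illustrative names, not from the repository:

```cpp
// Sketch: snap box edges that fall within `threshold` pixels of the image
// border onto the border itself, mirroring the torch.where fix above.
#include <opencv2/core.hpp>
#include <vector>

void ClampBoxesToBorder(std::vector<cv::Rect2f>& boxes,
                        float imgWidth, float imgHeight,
                        float threshold = 20.0f) {
    for (cv::Rect2f& box : boxes) {
        float x1 = box.x, y1 = box.y;
        float x2 = box.x + box.width, y2 = box.y + box.height;
        if (x1 < threshold) x1 = 0.0f;                   // snap left edge
        if (y1 < threshold) y1 = 0.0f;                   // snap top edge
        if (x2 > imgWidth - threshold) x2 = imgWidth;    // snap right edge
        if (y2 > imgHeight - threshold) y2 = imgHeight;  // snap bottom edge
        box = cv::Rect2f(x1, y1, x2 - x1, y2 - y1);
    }
}
```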
Then it seems I need better hardware to complete my task. Thank you for your answer 😊
> My model is heatmap-based pose detection, and the output is a tensor of shape (1, 3, 48, 64). If I read it out directly through a raw pointer, the data layout gets scrambled and post-processing becomes troublesome, so I'd like to ask whether onnxruntime can return a tensor directly, or whether the result can be reshaped.

Hello, I ran into the same problem. Do you know the solution? Thanks!
> My model is heatmap-based pose detection, and the output is a tensor of shape (1, 3, 48, 64). If I read it out directly through a raw pointer, the data layout gets scrambled and post-processing becomes troublesome, so I'd like to ask whether onnxruntime can return a tensor directly, or whether the result can be reshaped.

Solved. If the output tensor `masks` has size [1, 4, 1080, 1920] and we want to store it in a cv::Mat named `mask`, we can try this:
```cpp
for (unsigned int index = 0; index < 4; index++) {
    cv::Mat mask(srcImage.rows, srcImage.cols, CV_8UC1);
    for (unsigned int i = 0; i < mask.rows; i++) {...
```
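Since the snippet above is cut off, here is a self-contained sketch of the same idea under stated assumptions: the output is a contiguous float NCHW tensor of shape [1, 4, H, W], and a pixel belongs to the mask when its value is greater than 0 (both are assumptions; adjust to your model):

```cpp
// Sketch: unpack a [1, 4, H, W] float mask tensor (NCHW layout assumed)
// into one single-channel 8-bit cv::Mat per channel.
#include <onnxruntime_cxx_api.h>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Mat> TensorToMasks(Ort::Value& output) {
    Ort::TensorTypeAndShapeInfo info = output.GetTensorTypeAndShapeInfo();
    std::vector<int64_t> shape = info.GetShape();   // {1, C, H, W}
    const int64_t C = shape[1], H = shape[2], W = shape[3];
    const float* data = output.GetTensorData<float>();

    std::vector<cv::Mat> masks;
    for (int64_t c = 0; c < C; c++) {
        // Wrap the channel's contiguous H*W block without copying ...
        cv::Mat channel(static_cast<int>(H), static_cast<int>(W), CV_32FC1,
                        const_cast<float*>(data + c * H * W));
        // ... then binarize into an owned 8-bit mask (value > 0 -> 255).
        cv::Mat mask;
        cv::threshold(channel, mask, 0.0, 255.0, cv::THRESH_BINARY);
        mask.convertTo(mask, CV_8UC1);
        masks.push_back(mask);
    }
    return masks;
}
```

Wrapping each channel in a cv::Mat header first avoids a per-pixel copy loop; the only copy happens in `convertTo`, and the data layout is preserved because NCHW channels are contiguous.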
At the same time, when I open my AndroidManifest.xml, the android:versionCode, android:versionName, and android:screenOrientation attributes are flagged with red errors saying "is not allowed here".
Keypoints in decoupled/non-decoupled mode can be obtained through the GetKeypointsResult function. There is no corresponding interface for matching descriptors, but you can add a Get function in Extractor_PostProcess...
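As a rough sketch of that suggestion (the class layout, member names, and the GetDescriptorsResult accessor below are hypothetical, not taken from this repository):

```cpp
// Hypothetical sketch: expose descriptors alongside the existing
// GetKeypointsResult-style accessor by caching them during post-processing.
#include <opencv2/core.hpp>
#include <vector>

class Extractor {
public:
    // Assumed existing accessor for keypoints.
    const std::vector<cv::KeyPoint>& GetKeypointsResult() const { return keypoints_; }

    // Added accessor: returns the descriptors cached by Extractor_PostProcess.
    const cv::Mat& GetDescriptorsResult() const { return descriptors_; }

private:
    void Extractor_PostProcess(/* raw network outputs */) {
        // ... decode keypoints as before, and additionally keep the
        // descriptor matrix (one row per keypoint) in a member variable.
    }

    std::vector<cv::KeyPoint> keypoints_;
    cv::Mat descriptors_;  // filled in Extractor_PostProcess
};
```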
What version of onnxruntime are you using? This repository uses the CPU build, onnxruntime-win-x64-1.14.1. Can you point to the line of code where the nullptr error occurs?