
Kevin flexible inferencing

KevinH48264 opened this pull request 3 years ago

Description: Describe your changes.

Motivation and Context

  • Why is this change required? What problem does it solve?
  • If it fixes an open issue, please link to the issue here.

Staging link: https://kevinh48264.github.io/onnxruntime/

KevinH48264 commented Aug 11 '22 19:08

CLA assistant check
All CLA requirements met.

ghost commented Aug 11 '22 19:08

Hi, it seems the section "Comparison with PyTorch" (https://onnxruntime.ai/docs/tutorials/accelerate-pytorch/resnet-inferencing.html#comparison-with-pytorch) has the same code as "Comparison with OpenVINO" (https://onnxruntime.ai/docs/tutorials/accelerate-pytorch/resnet-inferencing.html#comparison-with-openvino).

yf711 commented Dec 05 '22 19:12

Hi,

Here is the code for the "Comparison with PyTorch" section, which should be there instead.

Comparison with PyTorch

PyTorch Inferencing on ResNet50

import time

import numpy as np
import torch

# resnet50, input_batch, categories, and ort_output are all defined in
# earlier sections of the tutorial.

latency = []

with torch.no_grad():
    start = time.time()
    pt_output = resnet50(input_batch)
    latency.append(time.time() - start)

# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes.
# The output has unnormalized scores. To get probabilities, you can run a
# softmax on it.
probabilities = torch.nn.functional.softmax(pt_output[0], dim=0)

# Show top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
    print(categories[top5_catid[i]], top5_prob[i].item())

print("PyTorch CPU/GPU Inference time = {} ms".format(format(sum(latency) * 1000 / len(latency), '.2f')))

# Verify that the PyTorch and ONNX Runtime outputs match within tolerance
print("***** Verifying correctness *****")
for i in range(2):
    print('PyTorch and ONNX Runtime output {} are close:'.format(i),
          np.allclose(ort_output, pt_output.cpu(), rtol=1e-05, atol=1e-04))

Sample output:

Egyptian cat 0.7860558032989502
tabby 0.1173100471496582
tiger cat 0.020089421421289444
Siamese cat 0.011728067882359028
plastic bag 0.005217487458139658
PyTorch CPU Inference time = 58.14 ms
***** Verifying correctness *****
PyTorch and ONNX Runtime output 0 are close: True
PyTorch and ONNX Runtime output 1 are close: True
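
For context, the ort_output used in the correctness check is assumed to come from the ONNX Runtime inferencing step earlier in the tutorial. A minimal sketch of that step, assuming the model was exported as "resnet50.onnx" (the file name and ort_session variable are illustrative here, not the tutorial's exact code):

import onnxruntime

# Hypothetical setup: "resnet50.onnx" is the exported model and
# input_batch is the same preprocessed image tensor used above.
ort_session = onnxruntime.InferenceSession("resnet50.onnx", providers=["CPUExecutionProvider"])

# Feed the input under the name the ONNX graph expects, then run
ort_inputs = {ort_session.get_inputs()[0].name: input_batch.numpy()}
ort_output = ort_session.run(None, ort_inputs)[0]

With that in place, ort_output is a NumPy array of the same shape as pt_output, so np.allclose can compare the two directly.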

Best, Kevin


KevinH48264 commented Dec 05 '22 20:12