Support for Add/Sub with int8, int16, uint8, uint16 in CPU Provider
Describe the issue
ONNX opset 14 allows Add and Sub with int8, int16, uint8, and uint16 inputs, but the default CPU execution provider in ONNX Runtime does not appear to register kernels for these types.
I have written a patch that adds support for these operations in the CPU provider. Can I just create a pull request to contribute this patch?
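For reference, the expected semantics are plain element-wise addition on the narrow integer types. The ONNX reference implementation is numpy-based, so a minimal sketch of the behavior a CPU kernel would need to match (including numpy's C-style wraparound on overflow, which I am assuming carries over) is:

```python
import numpy as np

# Element-wise Add on uint8, as the numpy-based ONNX reference
# implementation computes it. Overflow wraps modulo 256.
a = np.array([250, 1], dtype=np.uint8)
b = np.array([10, 2], dtype=np.uint8)
print(a + b)  # wraps: [4 3]
```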
To reproduce
import onnx
import onnxruntime as ort

def run_model(typ):
    left = onnx.helper.make_tensor('left', typ, [1], [1])
    right = onnx.helper.make_tensor('right', typ, [1], [2])
    add_node = onnx.helper.make_node(
        'Add',
        inputs=['left', 'right'],
        outputs=['output']
    )
    output = onnx.helper.make_tensor_value_info('output', typ, [1])
    graph = onnx.helper.make_graph(
        nodes=[add_node],
        name='TestModel',
        inputs=[],
        outputs=[output],
        initializer=[left, right]
    )
    model = onnx.helper.make_model(graph, opset_imports=[onnx.helper.make_opsetid("", 14)])
    session = ort.InferenceSession(model.SerializeToString(), providers=['CPUExecutionProvider'])
    return session.run(None, {})

print(run_model(onnx.TensorProto.INT32))  # good
print(run_model(onnx.TensorProto.UINT8))  # [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for Add(14) node with name ''
Urgency
No response
Platform
Linux
OS Version
Ubuntu 22.04
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.16.3
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response
Yes, you can create a PR to contribute the patch.
Thank you for the reply. I created a pull request: https://github.com/microsoft/onnxruntime/pull/19244
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.