
Implement PyTorch op `upsample_bicubic2d`

AustinStarnes opened this issue 3 years ago · 11 comments

Name of layer type: upsample_bicubic2d
PyTorch or TensorFlow: PyTorch
coremltools Version: 6.1
PyTorch Version: 1.12.1
Impact: On one hand, other upsampling methods are already implemented, so I'm inclined to say this is low priority. On the other hand, one could argue that different upsampling methods introduce noise into the data differently. Regardless, I'm hoping coremltools aspires to the same feature set as PyTorch and implements functionality from common image processing libraries (such as Pillow, which is probably why PyTorch implemented this to begin with, i.e. PIL.Image.BICUBIC).

I can't recommend prioritizing this one over other missing torch ops, but I figured I could create a ticket to track discussion of this layer type.
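
For a concrete sense of how much the filters disagree, torch.nn.functional.interpolate can compare bicubic against bilinear on the same input:

import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 112, 112)
bicubic = F.interpolate(x, size=(336, 336), mode="bicubic", align_corners=False)
bilinear = F.interpolate(x, size=(336, 336), mode="bilinear", align_corners=False)

# The filters genuinely differ: the max elementwise difference is nonzero.
print((bicubic - bilinear).abs().max())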

Here is a minimal environment you could create to reproduce:

conda create -n bicubic2d pytorch::pytorch==1.12.1 torchvision
conda activate bicubic2d
pip install coremltools==6.1

And here is a minimal script that triggers the error reporting the op as unimplemented:

import torch
from torchvision.transforms import InterpolationMode, Resize

import coremltools as ct

class Net(torch.nn.Module):
    def forward(self, img):
        return Resize((336, 336), InterpolationMode.BICUBIC)(img)

model = Net()
model.eval()

example_input = torch.rand(1, 3, 112, 112)
traced_model = torch.jit.trace(model, example_input)
out = traced_model(example_input)

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(shape=example_input.shape)],
)

AustinStarnes · Jan 29, 2023

upsample_bicubic2d is now supported.

TobyRoseman · Sep 29, 2023

@TobyRoseman The commit you refer to implements upsample_bilinear2d; the op name in torch is upsample_bicubic2d. I get the same error with coremltools==7 and torch==2.0.0 when using interpolate with bicubic.

PyTorch convert function for op 'upsample_bicubic2d' not implemented

darrenxyli · Oct 20, 2023

@darrenxyli - you're correct. Sorry for the confusion. Reopening this issue.

TobyRoseman · Oct 20, 2023

Any update on this? It would be very useful, especially _upsample_bicubic2d_aa (bicubic with anti-aliasing), since it is the closest to what PIL does.

Here is the script above, updated to include anti-aliasing:

import torch
from torchvision.transforms import InterpolationMode, Resize

import coremltools as ct

class Net(torch.nn.Module):
    def forward(self, img):
        return Resize((336, 336), InterpolationMode.BICUBIC, antialias=True)(img)

model = Net()
model.eval()

example_input = torch.rand(1, 3, 112, 112)
traced_model = torch.jit.trace(model, example_input)
out = traced_model(example_input)

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(shape=example_input.shape)],
)

And the error it produces:

PyTorch convert function for op '_upsample_bicubic2d_aa' not implemented.

Joony · Dec 21, 2023

Does anyone know when coremltools will support upsample_bicubic2d?

zhengweix · Mar 27, 2024

@TobyRoseman, @zhengweix

Another upvote for this feature!

I am experimenting with Würstchen and Stable Cascade models, which depend on decent up/down sampling with anti-aliasing.

All related tickets are being closed; is this still in the feature backlog?

Or perhaps there is an alternative conversion method?

SpiraMira · Apr 21, 2024

@TobyRoseman @zhengweix also upvoting! I'm stuck converting a DINOv2-based model because of the same error:

RuntimeError                              Traceback (most recent call last)
Cell In[10], line 1
----> 1 mlmodel = ct.convert(
      2     traceable_model,
      3     inputs=[ct.ImageType(name="input", shape=input_tensor.shape)],
      4 )

File ~/miniconda3/envs/coreml-conversions/lib/python3.11/site-packages/coremltools/converters/_converters_entry.py:581, in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, package_dir, debug, pass_pipeline)
    573     specification_version = _set_default_specification_version(exact_target)
    575 use_default_fp16_io = (
    576     specification_version is not None
    577     and specification_version >= AvailableTarget.iOS16
    578     and need_fp16_cast_pass
    579 )
--> 581 mlmodel = mil_convert(
    582     model,
    583     convert_from=exact_source,
    584     convert_to=exact_target,
    585     inputs=inputs,
    586     outputs=outputs_as_tensor_or_image_types,  # None or list[ct.ImageType/ct.TensorType]
    587     classifier_config=classifier_config,
    588     skip_model_load=skip_model_load,
    589     compute_units=compute_units,
    590     package_dir=package_dir,
...
    114         )
    116 logger.info("Converting op {} : {}".format(node.name, op_lookup))
    118 scopes = []

RuntimeError: PyTorch convert function for op 'upsample_bicubic2d' not implemented.

and I don't know how to overcome this! Are there ways to work around this while we wait?

snowzurfer · Jun 17, 2024
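
One interim workaround, until the op lands upstream, is to register a custom conversion function that lowers upsample_bicubic2d to MIL's bilinear resize. This is only an approximation (bilinear instead of bicubic, so outputs will differ slightly from PyTorch), and the sketch below assumes the output size is a compile-time constant in the traced graph. If your coremltools version already registers the op, pass override=True to register_torch_op.

from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

@register_torch_op
def upsample_bicubic2d(context, node):
    # TorchScript emits upsample_bicubic2d(input, output_size, align_corners, scales).
    inputs = _get_inputs(context, node)
    x = inputs[0]
    output_size = inputs[1].val  # assumed to be a constant [H_out, W_out] after tracing
    align_corners = bool(inputs[2].val)

    # MIL has no bicubic resize, so fall back to bilinear. The sampling_mode
    # mapping is approximate; validate converted outputs against PyTorch.
    res = mb.resize_bilinear(
        x=x,
        target_size_height=int(output_size[0]),
        target_size_width=int(output_size[1]),
        sampling_mode="ALIGN_CORNERS" if align_corners else "UNALIGN_CORNERS",
        name=node.name,
    )
    context.add(res)

Define this before calling ct.convert and the converter will pick it up when it reaches the op; a function named _upsample_bicubic2d_aa could be registered the same way. Since this silently swaps interpolation filters, compare the converted model's outputs against the PyTorch originals before relying on it.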

Has anybody had success converting Depth Anything V2 to Core ML? Hugging Face has a Core ML model, but only for the smallest variant.

x4080 · Jul 7, 2024