model_optimization

Graph or model_exporter doesn't preserve list/tuple and dict outputs

Open mikeseven opened this issue 2 years ago • 5 comments

Issue Type

Feature Request

Source

source

MCT Version

main

OS Platform and Distribution

ubuntu 20.04, pytorch 2

Python version

3.11

Describe the issue

A model with multiple outputs organized in a list, tuple, or dict is correctly mapped to FX.
However, when the model is exported after quantization, all of this structure is lost and only a set of unique outputs is returned: if the output had activations in 2 lists, only 1 list is returned.
For a dict, the keys are dropped and only the unique values are returned.

This is a big problem for multi-head models, e.g. object detection.

I think it's a bug, but looking at the code, I suspect it's a missing feature that needs to be implemented in multiple places.

Expected behaviour

list, tuple, and dict outputs must be preserved exactly as FX traces them.

Code to reproduce the issue

Simply make a simple model whose output is (list1, list2).
After PyTorch model reconstruction you will get all elements of both lists, and the structure is even present in the Graph. However, if some elements are present in both lists, only one will be returned.
Likewise for dict outputs: no dict is returned, only the unique values.
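A minimal repro along the lines described above might look like this (the module name `M` and the layer sizes are illustrative, not from MCT; the point is a forward that returns two lists sharing an element, which FX itself traces faithfully):

```python
import torch
import torch.fx


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a = torch.nn.Linear(4, 4)
        self.b = torch.nn.Linear(4, 4)

    def forward(self, x):
        y = self.a(x)
        z = self.b(x)
        # y appears in both lists; the reported bug is that MCT's export
        # keeps only the unique values, collapsing the two lists into one.
        return ([y, z], [y])


gm = torch.fx.symbolic_trace(M())
out = gm(torch.randn(1, 4))

# FX preserves the container structure: a 2-tuple of lists,
# with the shared element present in both.
assert isinstance(out, tuple) and len(out) == 2
assert len(out[0]) == 2 and len(out[1]) == 1
```

Running the same model through MCT quantization and export is where, per this report, the tuple-of-lists collapses into a single flat list of unique values.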

Log output

No response

mikeseven avatar Aug 31 '23 23:08 mikeseven

Here is a simple graph that returns a dict with 2 lists:

graph():
    %x : [#users=5] = placeholder[target=x]
    %layers_0 : [#users=1] = call_module[target=layers.0](args = (%x,), kwargs = {})
    %layers_1 : [#users=1] = call_module[target=layers.1](args = (%x,), kwargs = {})
    %layers_2 : [#users=1] = call_module[target=layers.2](args = (%x,), kwargs = {})
    %layers_3 : [#users=1] = call_module[target=layers.3](args = (%x,), kwargs = {})
    %layers_4 : [#users=1] = call_module[target=layers.4](args = (%x,), kwargs = {})
    return {'y1': (layers_0, layers_1, layers_2, layers_3, layers_4), 'y2': (layers_0, layers_1, layers_2, layers_3, layers_4)}

The output has 2 items, i.e. 2 lists.

Through MCT, we get 5 items instead, i.e. the unique values of both lists combined:

graph():
    %_args : [#users=1] = placeholder[target=*args]
    %getitem : [#users=5] = call_function[target=operator.getitem](args = (%_args, 0), kwargs = {})
    %layers_0 : [#users=1] = call_module[target=layers_0](args = (%getitem,), kwargs = {})
    %layers_1 : [#users=1] = call_module[target=layers_1](args = (%getitem,), kwargs = {})
    %layers_2 : [#users=1] = call_module[target=layers_2](args = (%getitem,), kwargs = {})
    %layers_3 : [#users=1] = call_module[target=layers_3](args = (%getitem,), kwargs = {})
    %layers_4 : [#users=1] = call_module[target=layers_4](args = (%getitem,), kwargs = {})
    return [layers_0, layers_1, layers_2, layers_3, layers_4]

In object detection, where we have outputs like 'bbox', 'scores', and 'classes', it's critical to maintain the lists and/or dict keys.
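One way to reconcile the two graphs above is to record the nested output structure as a spec when reading the FX graph, export the deduplicated flat list (as MCT already does), and rebuild the containers from the spec afterwards. The sketch below is purely illustrative and uses hypothetical helper names, not MCT's actual API; the idea mirrors what `torch.utils._pytree` does with flatten/unflatten:

```python
def flatten_outputs(obj, leaves, index):
    """Replace each unique leaf with an integer position in `leaves`,
    returning a spec that records the original container structure."""
    if isinstance(obj, dict):
        return {k: flatten_outputs(v, leaves, index) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(flatten_outputs(v, leaves, index) for v in obj)
    if id(obj) not in index:          # deduplicate shared leaves
        index[id(obj)] = len(leaves)
        leaves.append(obj)
    return index[id(obj)]


def unflatten_outputs(spec, leaves):
    """Rebuild the original containers from the flat, deduplicated leaves."""
    if isinstance(spec, dict):
        return {k: unflatten_outputs(v, leaves) for k, v in spec.items()}
    if isinstance(spec, (list, tuple)):
        return type(spec)(unflatten_outputs(v, leaves) for v in spec)
    return leaves[spec]


# Stand-ins for the five layer outputs in the graph above,
# shared between the two dict entries:
y = ["l0", "l1", "l2", "l3", "l4"]
outputs = {"y1": tuple(y), "y2": tuple(y)}

leaves, index = [], {}
spec = flatten_outputs(outputs, leaves, index)
assert len(leaves) == 5               # only unique values are exported
rebuilt = unflatten_outputs(spec, leaves)
assert rebuilt == outputs             # but the structure can be restored
```

With a spec like this stored alongside the exported model, the flat list MCT currently returns would be enough to reconstruct the original dict-of-tuples output.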

mikeseven avatar Sep 01 '23 00:09 mikeseven

Hi @mikeseven,

This is a known limitation of MCT (see Technical Constraints on our project's website).

I can see that you managed to fix it on your fork. If this solution resolves the issue for you, feel free to open a PR and we will review it.

ofirgo avatar Sep 03 '23 11:09 ofirgo

Hi @mikeseven

Wanted to make sure you got the answer, and to ask whether you'd be interested in contributing your solution to Sony MCT, which would be much appreciated.

Also, I would love to chat with you in case you're interested, please shoot me an email if you do ([email protected])

ServiAmirPM avatar Sep 05 '23 07:09 ServiAmirPM

I'm still making sure I didn't forget anything, but the code solving this issue is already in my branch multi-head_preservation. Currently testing against today's main branch changes. Looks good so far; I will open a PR tonight.

mikeseven avatar Sep 07 '23 19:09 mikeseven

Stale issue message

github-actions[bot] avatar Nov 07 '23 10:11 github-actions[bot]