
Consistent use of `"sequence-classification"` vs. `"text-classification"` / `"audio-classification"`

Open · fxmarty opened this issue on May 10, 2022 · 2 comments

Currently, transformers' FeaturesManager._TASKS_TO_AUTOMODELS handles the strings passed when loading models. Notably, it is relied on in the ORTQuantizer.from_pretrained() method (where, for example, feature="sequence-classification"):

https://github.com/huggingface/optimum/blob/5653a16727fc99b627d45827485b2ac0ace4c66f/optimum/onnxruntime/quantization.py#L102

Meanwhile, the pipeline abstraction for text classification expects pipeline(..., task="text-classification"). Hence it is troublesome for users to have to juggle both "text-classification" and "sequence-classification".
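To make the mismatch concrete, here is a minimal sketch (it assumes the FeaturesManager helpers shipped with transformers.onnx at the time of writing; exact strings may differ across versions):

from transformers import pipeline
from transformers.onnx import FeaturesManager

# The ONNX export side resolves "features" such as "sequence-classification"
model_class = FeaturesManager.get_model_class_for_feature("sequence-classification")
print(model_class)  # resolves to AutoModelForSequenceClassification

# ...while the pipeline abstraction expects the task name "text-classification"
classifier = pipeline(task="text-classification", model="Bhumika/roberta-base-finetuned-sst2")
print(classifier("The two naming schemes do not match."))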

A handy workflow could be the following:

from onnxruntime.quantization import QuantFormat, QuantizationMode, QuantType
from optimum.onnxruntime import ORTQuantizer
from optimum.onnxruntime.configuration import QuantizationConfig
from optimum.onnxruntime.modeling_ort import ORTModel

from optimum.pipelines import pipeline as _optimum_pipeline
from transformers import pipeline as _transformers_pipeline

from optimum.onnxruntime.modeling_ort import ORTModelForSequenceClassification

static_quantization = False
task = "text-classification"

# Create the quantization configuration containing all the quantization parameters
qconfig = QuantizationConfig(
    is_static=static_quantization,
    format=QuantFormat.QDQ if static_quantization else QuantFormat.QOperator,
    mode=QuantizationMode.QLinearOps if static_quantization else QuantizationMode.IntegerOps,
    activations_dtype=QuantType.QInt8 if static_quantization else QuantType.QUInt8,
    weights_dtype=QuantType.QInt8,
    per_channel=False,
    reduce_range=False,
    operators_to_quantize=["Add"],
)

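# Instantiate the quantizer from the checkpoint. Note: from_pretrained() currently
# expects an ONNX "feature" such as "sequence-classification", so passing the
# pipeline task name here is what triggers the KeyError described below.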
quantizer = ORTQuantizer.from_pretrained(
    "Bhumika/roberta-base-finetuned-sst2",
    feature=task,
    opset=15,
)

tokenizer = quantizer.tokenizer

model_path = "model.onnx"
quantized_model_path = "quantized_model.onnx"

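# Dynamic quantization (is_static=False), so no calibration dataset is used and
# no tensor ranges or quantization preprocessor are required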
quantization_preprocessor = None
ranges = None

# Export the quantized model
quantizer.export(
    onnx_model_path=model_path,
    onnx_quantized_model_output_path=quantized_model_path,
    calibration_tensors_range=ranges,
    quantization_config=qconfig,
    preprocessor=quantization_preprocessor,
)

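# Load the quantized ONNX model and wrap it in the sequence-classification class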
ort_session = ORTModel.load_model(quantized_model_path)
ort_model = ORTModelForSequenceClassification(ort_session, config=quantizer.model.config)

task_alias = "text-classification"
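# Build an Optimum pipeline around the quantized model, using the pipeline task name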
ort_pipeline = _optimum_pipeline(
    task=task,
    model=ort_model,
    tokenizer=tokenizer,
    feature_extractor=None,
    accelerator="ort"
)

which currently raises KeyError: "Unknown task: text-classification" in ORTQuantizer.from_pretrained().

Right now we need to pass something like

task = "text-classification"
feature = "sequence-classification"

and provide the feature to ORTQuantizer while the pipeline still takes the task, which is troublesome.

Possible solutions are:

  • Have an auto-mapping from "tasks" (as on https://huggingface.co/models ) to "features" ("text-classification" --> "sequence-classification", "audio-classification" --> "sequence-classification"); a sketch follows this list
  • Modify transformers.onnx.FeaturesManager to use the real task names rather than "sequence-classification"
  • Add abstraction classes like ForTextClassification and ForAudioClassification that simply inherit from ForSequenceClassification, and modify transformers.onnx.FeaturesManager accordingly
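As a rough sketch of the first option (the mapping and helper below are illustrative assumptions, not existing optimum code):

# Hypothetical task -> feature mapping that optimum could maintain internally
_TASK_TO_FEATURE = {
    "text-classification": "sequence-classification",
    "audio-classification": "sequence-classification",
    "token-classification": "token-classification",
    "question-answering": "question-answering",
}

def task_to_feature(task: str) -> str:
    """Translate a pipeline task name into an ONNX export feature name."""
    if task not in _TASK_TO_FEATURE:
        raise KeyError(f"Unknown task: {task}")
    return _TASK_TO_FEATURE[task]

# e.g. ORTQuantizer.from_pretrained(checkpoint, feature=task_to_feature("text-classification"), ...)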

@lewtun

fxmarty avatar May 10 '22 09:05 fxmarty

Thanks for creating this detailed issue @fxmarty!

One challenge with unifying the "features" used in the ONNX export and the tasks defined in the pipeline() function is that some features also need to distinguish whether past key values are exported, e.g. these two features are different:

  • causal-lm
  • causal-lm-with-past

Having said that, I agree that it would be nice if one could reuse the same task taxonomy from the transformers.pipeline() function, so maybe some light refactoring can capture the majority of tasks.
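For example, here is a rough sketch of how a task-based API could still keep the "-with-past" distinction (the helper and suffix handling below are assumptions, not the current FeaturesManager behaviour):

# Hypothetical helper: map a pipeline task to a feature, optionally appending
# the "-with-past" suffix used by the ONNX export
_TASK_TO_FEATURE = {
    "text-generation": "causal-lm",
    "text-classification": "sequence-classification",
}

def to_feature(task: str, use_past: bool = False) -> str:
    feature = _TASK_TO_FEATURE.get(task, task)
    return f"{feature}-with-past" if use_past else feature

assert to_feature("text-generation", use_past=True) == "causal-lm-with-past"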

cc @michaelbenayoun who knows more about the history behind the ONNX "features" names

lewtun avatar May 10 '22 10:05 lewtun

Yes, I think the original feature names were chosen by looking at the class names (BertForSequenceClassification, etc.). I think @fxmarty's first suggestion could work and is easy to implement.

michaelbenayoun avatar May 11 '22 12:05 michaelbenayoun