
Different sentiment class probabilities for sequential processing vs batch processing

[Open] Kayne88 opened this issue 3 years ago · 2 comments

System Info

  • platform: Windows
  • python: 3.7
  • transformers: latest
  • model: fine-tuned BERT (from cardiffnlp/twitter-roberta-base-sentiment)

I posted this issue on the forum (https://discuss.huggingface.co/t/different-sentiments-when-texts-processed-in-batches-vs-singles/19462) but didn't receive an answer. The behavior is hard to explain and looks like it might be a bug.

Cheers

@LysandreJik

Who can help?

No response

Information

  • [X] The official example scripts
  • [ ] My own modified scripts

Tasks

  • [ ] An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • [X] My own task or dataset (give details below)

Reproduction

See the description under the forum link above.

Expected behavior

I would expect the class probabilities for a given text to be identical regardless of whether classification is done sequentially or in batches.
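This expectation is sound: with the attention mask applied correctly, batched and sequential results match up to floating-point noise. A common cause of genuine discrepancies is padding leaking into a pooling step. The sketch below uses a hypothetical toy classifier (plain Python, not the actual model) to show that masked pooling reproduces the single-text result exactly, while pooling over pad positions shifts the probabilities:

```python
# Toy sketch (hypothetical classifier, NOT the cardiffnlp model): shows how
# padding positions, if not excluded from pooling, change the probabilities
# for the same text when it is processed inside a batch.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mean_pool(embeddings, mask=None):
    """Average token vectors; positions with mask == 0 (padding) are skipped."""
    if mask is None:
        mask = [1] * len(embeddings)
    dim = len(embeddings[0])
    total = [0.0] * dim
    n = 0
    for vec, m in zip(embeddings, mask):
        if m:
            total = [t + v for t, v in zip(total, vec)]
            n += 1
    return [t / n for t in total]

# Toy "classifier head": dot product with two class weight vectors.
W = [[1.0, -1.0], [-0.5, 2.0]]

def classify(pooled):
    logits = [sum(wi * pi for wi, pi in zip(w, pooled)) for w in W]
    return softmax(logits)

# One short text: 2 tokens with 2-dimensional embeddings (made-up values).
tokens = [[0.2, 0.4], [0.6, -0.1]]

# Sequential: the text is encoded alone, so no padding is involved.
p_single = classify(mean_pool(tokens))

# Batched next to a longer text: a zero pad vector is appended.
padded = tokens + [[0.0, 0.0]]
mask = [1, 1, 0]

p_batch_masked = classify(mean_pool(padded, mask))  # pad excluded: matches
p_batch_unmasked = classify(mean_pool(padded))      # pad leaks in: differs

print("single          :", p_single)
print("batched, masked :", p_batch_masked)
print("batched, no mask:", p_batch_unmasked)
```

With the mask respected, the batched result is bit-identical to the sequential one; without it, the zero vector dilutes the pooled representation and the class probabilities move.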

Kayne88 avatar Jul 06 '22 18:07 Kayne88

Hi @Kayne88 -- could you please share a complete script for reproducibility? In your original script you were missing the definition of tokenizer and model :)
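A complete reproduction script along the lines requested might look like the following sketch (the example texts are assumptions, not the reporter's original data; the model name is the one given in the system info):

```python
# Hypothetical reproduction sketch: compare class probabilities from
# one-at-a-time inference vs batched (padded) inference on the same texts.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "cardiffnlp/twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

texts = ["I love this!", "This is the worst experience I have ever had."]

with torch.no_grad():
    # Sequential: each text is encoded alone, so no padding is involved.
    single = [
        torch.softmax(model(**tokenizer(t, return_tensors="pt")).logits, dim=-1)
        for t in texts
    ]
    # Batched: shorter texts are padded up to the longest sequence.
    batch_inputs = tokenizer(texts, return_tensors="pt", padding=True)
    batched = torch.softmax(model(**batch_inputs).logits, dim=-1)

for i, t in enumerate(texts):
    print(t)
    print("  single :", single[i][0].tolist())
    print("  batched:", batched[i].tolist())
```

If the attention mask is passed through correctly (as `tokenizer(..., padding=True)` does via `attention_mask`), the two sets of probabilities should agree to within small floating-point differences; a large gap would point at the bug being discussed.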

gante avatar Jul 14 '22 11:07 gante

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Aug 07 '22 15:08 github-actions[bot]