
Tokenizers in pipelines

Open normster opened this issue 4 years ago • 4 comments

Is there a recommended way of using HuggingFace tokenizers inside ffcv pipelines? I realize I could pre-tokenize the text and store the raw ints in the dataset, but I'd like the flexibility of switching between different tokenizers without re-processing the dataset.
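
For reference, here is a rough sketch of the pre-tokenization approach I'd like to avoid. The field name, `max_length`, and dataset wrapper are hypothetical, and the exact `NDArrayField`/`DatasetWriter` arguments are assumed from ffcv's image quickstart, so treat it as illustrative only:

```python
# Sketch only: pre-tokenize captions and store fixed-length int arrays in a .beton file.
import numpy as np
from transformers import AutoTokenizer
from ffcv.writer import DatasetWriter
from ffcv.fields import NDArrayField

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # example checkpoint
max_length = 77  # hypothetical fixed caption length

class TokenizedCaptions:
    """Wraps a list of caption strings as an indexed dataset of int arrays."""
    def __init__(self, captions):
        self.captions = captions
    def __len__(self):
        return len(self.captions)
    def __getitem__(self, idx):
        ids = tokenizer(self.captions[idx], padding="max_length",
                        truncation=True, max_length=max_length)["input_ids"]
        return (np.array(ids, dtype=np.int64),)

# Assumed writer API; switching tokenizers would require rewriting this file.
writer = DatasetWriter("captions.beton", {
    "input_ids": NDArrayField(dtype=np.dtype("int64"), shape=(max_length,)),
})
writer.from_indexed_dataset(TokenizedCaptions(["a photo of a cat", "two dogs playing"]))
```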

normster avatar Jan 20 '22 04:01 normster

Hello,

I don't have experience with them. Can you provide more information:

  • What FFCV Field are you using?
  • What data type does a tokenizer expect as input?
  • What is the output data type?
  • Do they work by batch or by sample?
  • Are they implemented in Python, or in a lower-level language accessed through cffi or Python modules?

GuillaumeLeclerc avatar Jan 20 '22 05:01 GuillaumeLeclerc

I'm storing the textual metadata in a JSON field. Here is a quick tour of how they work: https://huggingface.co/docs/transformers/preprocessing. They take strings as input and output dictionaries of int arrays, either as PyTorch/TensorFlow tensors or as plain Python lists of ints. They work on both batched inputs (a list of strings) and single strings. They come in two varieties: a full Python version and a faster version that wraps an underlying Rust implementation. They run on the CPU, and I estimate that the Python version of the BERT tokenizer processes a sentence in roughly the time torchvision takes to apply standard ResNet-style augmentations to an image.
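
To make the input/output types concrete, a minimal sketch (the checkpoint name is just an example; the fast Rust-backed tokenizer is loaded automatically when available):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Single string -> dict-like object of plain Python int lists
single = tokenizer("Hello, world!")
print(single.keys())        # dict_keys(['input_ids', 'token_type_ids', 'attention_mask'])
print(single["input_ids"])  # e.g. [101, 7592, 1010, 2088, 999, 102]

# Batch of strings -> padded PyTorch tensors
batch = tokenizer(["short caption", "a somewhat longer caption here"],
                  padding=True, return_tensors="pt")
print(batch["input_ids"].shape)  # (2, longest_sequence_length_in_batch)
```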

normster avatar Jan 20 '22 05:01 normster

Do you need all three elements of the dict that the tokenizer returns?

GuillaumeLeclerc avatar Jan 20 '22 07:01 GuillaumeLeclerc

Not always, but often. The token type ids are mainly useful for specific NLP tasks. For my use case (replacing this dataset for CLIP training), we need the attention mask in addition to the input ids, because not all captions span the full sequence length.
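
For example (a sketch with made-up captions), padding a batch of variable-length captions is exactly where the attention mask matters:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

captions = ["a cat", "a photo of a dog playing in the park"]
batch = tokenizer(captions, padding=True, return_tensors="pt")

# input_ids and attention_mask have the same shape; the mask is 1 for real
# tokens and 0 for the padding appended to the shorter caption.
print(batch["input_ids"].shape)
print(batch["attention_mask"])
```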

normster avatar Jan 20 '22 07:01 normster