
Training end-to-end on my own dataset

Open MiriamFarber opened this issue 4 years ago • 0 comments

I have my own dataset of (image, caption) pairs on which I'd like to train the model. Does this repository support doing that end-to-end, without first extracting features/bounding boxes?

Can I do this by simply not passing the `--features` flag?
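For context, the training commands in this repository pass a `--features` argument to select which pre-extracted image features the captioning task consumes. A minimal sketch of the kind of invocation in question, assuming the README's command structure (`--features` is the flag the question refers to; the architecture name, data directory, and remaining flags here are illustrative, not verified):

```sh
# Hypothetical fairseq training invocation for this repo.
# --features selects pre-extracted features (e.g. grid or
# object/bounding-box); the question asks whether it can be
# omitted to train directly on raw (image, caption) pairs.
python -m fairseq_cli.train \
  --user-dir task \
  --task captioning \
  --features grid \
  --arch default-captioning-arch \
  ms-coco
```

The question, then, is whether the `captioning` task can read raw images when `--features` is omitted, or whether a feature/bounding-box extraction step is always required beforehand.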

MiriamFarber · Feb 25 '21