CLIP-Caption-Reward
PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)
Hello, I have successfully generated all features (both text and visual) for the COCO dataset. However, when running MLE training, the code throws the following error at the moment it...
I re-trained the MLE phase on 8 V100s using your released config file `configs/phase1/clipRN50_mle.yml`, but the performance is lower than reported in the paper (CIDEr: 106.5 vs. 110.3). Does the...
Hi, authors. Could you please provide the details of `language_evaluation` in `eval_finecapeval.py`, used in the Evaluation on FineCapEval?
Hello, thanks for the code. Do you have an ONNX release? Inference on my Raspberry Pi 4B takes 1 min 20 sec per image.
If I want to select only part of the dataset for training, how should I modify the input json file? I don't understand the relationship between several...
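For questions like the one above, a common approach in Karpathy-split-style captioning pipelines is to filter the `images` list of the input json before preprocessing. The sketch below is a minimal, hedged example; the field names (`images`, `split`) follow the usual `cocotalk.json` convention and may differ in your copy, and the label h5 would still need to be regenerated from the filtered json with the repo's preprocessing scripts.

```python
import json

def subset_dataset(in_path, out_path, keep_fraction=0.1):
    """Write a copy of a Karpathy-style caption json keeping only the
    first keep_fraction of the "images" entries (illustrative only)."""
    with open(in_path) as f:
        data = json.load(f)
    images = data["images"]
    n_keep = max(1, int(len(images) * keep_fraction))
    data["images"] = images[:n_keep]  # all other top-level fields are kept as-is
    with open(out_path, "w") as f:
        json.dump(data, f)
    return n_keep
```

A stratified variant (keeping the original train/val/test `split` proportions) would filter per split value instead of slicing the whole list.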
Excuse me, is the downloaded pre-trained model fully trained? Why is the test result as shown in the figure below?
Hello, I am trying to reproduce your code and I am confused about these parameters:

```
input_label_h5: data/cocotalk_label.h5
input_fc_dir: data/cocotalk_clip_RN50_fc
input_att_dir: data/cocotalk_clip_RN50_att
```

Could you please elaborate on these parameters?...
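As a rough orientation for questions like this: in the ruotianluo-style captioning codebases this repo builds on, `input_label_h5` is an HDF5 file of tokenized ground-truth captions, while the `*_fc_dir` and `*_att_dir` directories hold per-image feature files (global "fc" features and spatial "att" feature maps) keyed by COCO image id. The sketch below only illustrates that per-image-file layout; the image id `391895` and the feature dimension are illustrative assumptions, not values from this repo.

```python
import os
import tempfile
import numpy as np

# Hypothetical feature directory in the assumed layout: one .npy file
# per COCO image id, holding that image's global ("fc") feature vector.
fc_dir = tempfile.mkdtemp()
feat = np.zeros(2048, dtype=np.float32)  # dimension is illustrative
np.save(os.path.join(fc_dir, "391895.npy"), feat)

# A dataloader would then look the feature up by image id:
loaded = np.load(os.path.join(fc_dir, "391895.npy"))
```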