Kanzhi Cheng

36 comments by Kanzhi Cheng

After (1) changing the open mode to "r" or "rt": `with open(infile, "r") as tsv_in_file`, and (2) changing `base64.decodestring(item[field])` to `base64.decodebytes(bytes(item[field], encoding="utf8"))`, it works.
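For context, here is a minimal sketch of how those two fixes fit into the TSV reading loop. The `FIELDNAMES`, the `num_boxes`/`features` columns, and the float32 dtype are assumptions based on the common bottom-up-attention feature format, not taken from the original script:

```python
import base64
import csv
import sys

import numpy as np

csv.field_size_limit(sys.maxsize)  # base64-encoded feature strings can be very long

# Hypothetical schema; adjust to the actual columns of your TSV.
FIELDNAMES = ["image_id", "image_w", "image_h", "num_boxes", "boxes", "features"]

def read_tsv(infile):
    # Fix 1: open in text mode ("r"/"rt") so csv receives str rows, not bytes.
    with open(infile, "r") as tsv_in_file:
        reader = csv.DictReader(tsv_in_file, delimiter="\t", fieldnames=FIELDNAMES)
        for item in reader:
            for field in ("boxes", "features"):
                # Fix 2: decodestring() no longer exists in Python 3.9+;
                # encode the str field to bytes and use decodebytes() instead.
                raw = base64.decodebytes(bytes(item[field], encoding="utf8"))
                item[field] = np.frombuffer(raw, dtype=np.float32).reshape(
                    int(item["num_boxes"]), -1
                )
            yield item
```

The key points are the text-mode open, so the csv reader gets `str` rows, and `base64.decodebytes`, which replaces the `decodestring` alias removed in Python 3.9.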

> > after 1.changing the open mode to "r" or "rt": `with open(infile, "r") as tsv_in_file` 2.changing `base64.decodestring(item[field])` to `base64.decodebytes(bytes(item[field], encoding="utf8"))` > > It can work. > > ![image](https://user-images.githubusercontent.com/73008189/120953517-b10b1680-c77f-11eb-8bc5-f8747ebe0c28.png) >...

Thanks for your reminder. But could you explain why you think the dataset is not available? I took a look at the dataset and it seems fine; maybe you...

Wait.. @Doragd So you think the FlickrStyle10K (in fact, 7K) dataset is feasible for stylized image captioning, but the result in StyleNet is exaggerated? And by the way, what's the result in...

Hello. Have you tried to reproduce the CVAE baseline? In my experiments, this vanilla CVAE (i.e., with an N(0,1) prior distribution) achieves a much better result than he reported, by simply...
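For reference, here is a minimal sketch of what the vanilla N(0,1) prior amounts to in the training loss, assuming a Gaussian posterior parameterized by `mu` and `logvar` (hypothetical tensor names, not from the paper's code):

```python
import torch

def kl_to_standard_normal(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    # KL( N(mu, sigma^2) || N(0, I) ): the regularizer of a vanilla CVAE,
    # summed over latent dimensions and averaged over the batch.
    return (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()
```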

For me, since I want to use the checkpoint provided by the author (the unzipped upload_causal_motif_sgdet file), I need to set MODEL.PRETRAINED_DETECTOR_CKPT and OUTPUT_DIR to the correct file paths, for example `MODEL.PRETRAINED_DETECTOR_CKPT xxx/Scene-Graph-Benchmark/checkpoints/upload_causal_motif_sgdet...

@Srikeshram What batch_size can you set when finetuning OFA-large on your custom captioning dataset? The max batch_size is less than 10 on a 32G V100 with my code.