chenxianghu
@Justin-Tan We would like to see your compression results on the ADE20k dataset, can you try it? @Jillian2017 and I tried it, but the results are not good. Maybe some code modification is needed...
I want to test the performance of this model, so I modified the single_plot function like this and then ran your compress.py. There are two steps: 1. original image -> quantized...
I tested your pre-trained model; my timings: 1. original image -> quantized representation: about 1.5 s; 2. quantized representation -> reconstructed image: about 1 s. The test result is different from mine,...
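For reference, a minimal sketch of how the two stages could be timed separately, assuming a `tf.Session` `sess`, a feed dict prepared by compress.py, and the `model.w_hat` / `model.reconstruction` tensors mentioned in this thread (exact names may differ in the repo):

```python
import time

def time_two_step(sess, model, feed_dict):
    """Time quantization and reconstruction separately (sketch, names assumed)."""
    # Step 1: original image -> quantized representation w_hat
    t0 = time.time()
    w_hat = sess.run(model.w_hat, feed_dict=feed_dict)
    print('quantization: {:.2f} s'.format(time.time() - t0))

    # Step 2: quantized representation -> reconstructed image
    # (TF 1.x lets you feed most intermediate tensors directly)
    t1 = time.time()
    reconstruction = sess.run(model.reconstruction,
                              feed_dict={model.w_hat: w_hat})
    print('reconstruction: {:.2f} s'.format(time.time() - t1))
    return reconstruction
```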
First I trained my model on Cityscapes for 60 epochs, then continued training it on ADE20k for 10 epochs, and I find the compression quality becomes worse. Maybe the model doesn't...
OK, this morning I also read the paper, and I found I should train on ADE20k from ZERO, but one error occurred: it seems that the shapes of self.w_hat and Gv didn't...
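One quick way to pin down such a mismatch is to print the static shapes of the two tensors right before the op that fails (a debugging sketch; `Gv` here is the generator output named in the error):

```python
# Debugging sketch: compare static shapes before the failing op
print('self.w_hat:', self.w_hat.get_shape().as_list())
print('Gv:', Gv.get_shape().as_list())
```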
The shapes of self.example and self.reconstruction should be the same; for the Cityscapes dataset it should be [1, 512, 1024, 3], i.e. [batch_size, height, width, channels].
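As a quick sanity check, something like the following assertion could be added after the graph is built (a sketch, using the `model.example` and `model.reconstruction` tensors discussed above):

```python
# Sanity check: reconstruction must match the input shape,
# e.g. [1, 512, 1024, 3] = [batch_size, height, width, channels] for Cityscapes
example_shape = model.example.get_shape().as_list()
recon_shape = model.reconstruction.get_shape().as_list()
assert example_shape == recon_shape, \
    'shape mismatch: example {} vs reconstruction {}'.format(example_shape, recon_shape)
```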
I modified many places: 1) made my own h5 file, using only 200x200 to 975x975 JPEG images in ADE20K (the same as in the paper); 2) resized images to [512, 512], not padding or...
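For anyone else building the h5 file, here is a minimal sketch of how the ADE20K training images could be filtered to the 200x200-975x975 range and saved as an h5 of paths; the directory layout, the 'path' column name, and the output filename are assumptions, not the repo's exact format:

```python
import glob
import os
import pandas as pd
from PIL import Image

ADE20K_DIR = '/path/to/ADE20K/images/training'  # adjust to your layout
MIN_SIDE, MAX_SIDE = 200, 975

paths = []
for fp in glob.glob(os.path.join(ADE20K_DIR, '**', '*.jpg'), recursive=True):
    try:
        w, h = Image.open(fp).size
    except (IOError, OSError):
        continue  # skip unreadable files
    # keep only images whose sides fall in the range used by the paper
    if MIN_SIDE <= w <= MAX_SIDE and MIN_SIDE <= h <= MAX_SIDE:
        paths.append(fp)

df = pd.DataFrame({'path': paths})
df.to_hdf('ade20k_paths_train.h5', key='df', mode='w')
print('kept {} images'.format(len(df)))
```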
Yes, this is the description from the original paper: "Data sets: We train the proposed method on two popular data sets that come with hand-annotated semantic label maps, namely Cityscapes..."
I checked: some JPEG images' sizes are not in the range 200x200 to 975x975, e.g. ADE20K\images\training\h\hacienda\ADE_train_00008829.jpg is 1024x768.
@Jillian2017 I added noise while training on the ADE20K dataset by modifying the Network.dcgan_generator function to adapt it to 512x512; my generated image quality is also poor after 40 epochs, and some generated images even...
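For what it's worth, the paper's GC variant samples noise v from a fixed prior and concatenates it with the quantized representation before the generator; here is a minimal TF 1.x sketch of that idea (the function name and number of noise channels are assumptions, not the repo's exact code):

```python
import tensorflow as tf

def concat_noise(w_hat, noise_channels=8):
    """Concatenate channel-wise Gaussian noise v with the quantized code w_hat
    before feeding it to the generator (noise_channels is an assumed value)."""
    s = tf.shape(w_hat)
    noise_shape = tf.stack([s[0], s[1], s[2], noise_channels])
    v = tf.random_normal(noise_shape)
    return tf.concat([w_hat, v], axis=-1)
```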