
Leave-one-out rendering

Open · grgkopanas opened this issue 4 years ago • 8 comments

Hi,

I am trying to do leave-one-out rendering for the input cameras. I have been doing it this way: at this line https://github.com/intel-isl/StableViewSynthesis/blob/main/experiments/dataset.py#L263 I add nbs.remove(idx) before nbs is passed to the next function.
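For reference, the modification amounts to roughly the following (a sketch only; nbs is assumed to be a plain Python list of neighbour view indices and idx the target view index):

```python
# Sketch of the change described above (illustration only, not the exact
# upstream code): nbs is assumed to hold the neighbour/source view indices
# computed for the target view idx.
nbs = list(nbs)
if idx in nbs:
    nbs.remove(idx)  # leave-one-out: never use the target view as its own source
# nbs is then passed on to the next function as before
```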

The results are extremely blurry; am I doing something wrong?

All the best, George Kopanas

grgkopanas avatar Apr 01 '21 17:04 grgkopanas

You should be able to just use or adapt this method. It creates a Dataset object with mode=train, which is exactly the leave-one-out behaviour you want.

griegler avatar Apr 07 '21 16:04 griegler

In more detail, add an entry here with some name that calls a get_train_set_tat method modified to load your models. Then you can just use the provided evaluation code with your evaluation set name.
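Roughly along these lines; note that the entry name, function names, and signature below are only placeholders to illustrate the idea, not the actual code:

```python
# Hypothetical sketch only -- the entry name and function shapes below are
# placeholders, not the real registration code in this repository.
def get_eval_sets(args):
    eval_sets = []
    for name in args.eval_dsets:
        if name == "tat-loo-myscene":  # placeholder evaluation set name
            # get_train_set_tat (modified to load your own scene) returns a
            # Dataset with mode="train", i.e. every target view is rendered
            # only from the remaining views -- the leave-one-out behaviour.
            eval_sets.append(get_train_set_tat(args))
    return eval_sets
```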

griegler avatar Apr 07 '21 16:04 griegler

Hi,

Thanks for the reply. I managed to use get_train_set_tat, but in train mode I quickly run out of GPU memory on a 16GB GPU. With get_eval_set_tat I can render just fine.

Is there any quick and dirty way to use less GPU memory with get_train_set_tat?

Cheers, George

grgkopanas avatar Apr 08 '21 12:04 grgkopanas

What do you use for n_nbs? You could try reducing it.

griegler avatar Apr 08 '21 18:04 griegler

I had it at the default, which seems to be 3. That already seems pretty small to me, right?

grgkopanas avatar Apr 10 '21 08:04 grgkopanas

Yes, that should be more than feasible. Is the batch size set to 1?

griegler avatar Apr 12 '21 11:04 griegler

Batch size is 1, both for eval and train. I am attaching my git diff in case you notice something. Unfortunately, I don't have a GPU with more memory available.

svs_diff.txt is attached, and my command-line arguments are: --net resunet3.16_penone.dirs.avg.seq+9+1+unet+5+2+16.single+mlpdir+mean+3+64+16 --cmd eval --iter last --eval-dsets tat-scene-hugo --eval-scale 0.45

Thanks again, George

grgkopanas avatar Apr 12 '21 21:04 grgkopanas

> In more detail, add an entry here with some name that calls a get_train_set_tat method modified to load your models. Then you can just use the provided evaluation code with your evaluation set name.

Hi @griegler, could you point me to the code that removes the target image from the neighbors when the Dataset object is created with mode=train?

I was able to adapt my dataset and run evaluation; however, I am wondering whether leave-one-out rendering is actually occurring. If the target image exists in the dataset, wouldn't the unprojected points on the mesh always take points from that very same target image? Please clarify if I am understanding something incorrectly.

Perhaps I need to define a subseq and explicitly add the indices of the source images that are to be skipped. Is that correct?
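For concreteness, the kind of filtering I have in mind looks roughly like this (variable names are made up for illustration):

```python
# Rough sketch of the filtering in question (illustrative names only):
src_idxs = [i for i in all_view_idxs if i != tgt_idx]  # drop the held-out target view
# only the remaining source views would then be unprojected onto the mesh and
# aggregated into the per-point features used to render the target view
```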

akashsharma02 avatar Jul 15 '21 15:07 akashsharma02