chenk


```
import torch.nn as nn

class FrameConstructor(nn.Module):
    def __init__(self):
        super(FrameConstructor, self).__init__()

    def forward(self, coeffs, timestamps):
        # coeffs: [bs, n_deg+1, h, w]
        # timestamps: [bs, n_ts, h, w] or [bs, n_ts]
        n_deg = coeffs.shape[1] - 1
        ...
```
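For context, here is a minimal sketch of how per-pixel polynomial frames could be evaluated from coefficients with those shapes. The coefficient ordering (index `i` = degree `i`) and the Horner-style evaluation are my assumptions, not necessarily the repository's actual implementation; NumPy is used to keep the sketch dependency-light.

```python
import numpy as np

def eval_poly_frames(coeffs, timestamps):
    """Evaluate a per-pixel polynomial at a set of timestamps.

    coeffs:     [bs, n_deg+1, h, w], where coeffs[:, i] is assumed to be
                the degree-i coefficient (hypothetical convention)
    timestamps: [bs, n_ts]
    returns:    [bs, n_ts, h, w]
    """
    n_deg = coeffs.shape[1] - 1
    t = timestamps[:, :, None, None]      # [bs, n_ts, 1, 1] for broadcasting
    # Horner's scheme: start from the highest-degree coefficient
    out = coeffs[:, n_deg:n_deg + 1]      # [bs, 1, h, w]
    for i in range(n_deg - 1, -1, -1):
        out = out * t + coeffs[:, i:i + 1]   # broadcasts to [bs, n_ts, h, w]
    return out

coeffs = np.random.rand(2, 4, 8, 8).astype(np.float32)  # n_deg = 3
ts = np.random.rand(2, 5).astype(np.float32)
frames = eval_poly_frames(coeffs, ts)
print(frames.shape)  # (2, 5, 8, 8)
```

The same broadcasting pattern carries over directly to PyTorch tensors inside a `forward` method.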

Thank you for your detailed answer; it's very helpful. I now understand how the code works. Thanks a lot!

> We export the GT camera poses directly from the Blender file for the synthetic dataset (as is the cozyroom).

How about the real-blur dataset?

https://github.com/yenchenlin/nerf-pytorch/assets/72788314/b271f164-a8c6-4c6e-9364-727466ba4fc4

This is the rendered video. I wonder why the lower part is perfect while the upper part is almost black. Could it be because I comment...

Alternatively, the issue could also stem from my own Lego dataset. When generating the data, I did not follow the official data provided, but instead...

Thanks a lot for your help!

Thanks for sharing your sample data @chensong1995. However, I found that the number of images under "corrupted" is not equal to the number under "resized". And the code source...

> Hello chenkang455,
>
> The blurry images in `corrupted` are generated by the event simulator ESIM. I believe the core idea in their implementation is that there is a sliding...
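As a rough illustration of that sliding-window idea (my own sketch, not ESIM's actual pipeline, and the window length is an assumption), a blurry frame can be synthesized by averaging each consecutive run of sharp frames:

```python
import numpy as np

def synthesize_blur(sharp_frames, window=3):
    """Average each length-`window` run of sharp frames into one blurry frame.

    sharp_frames: [n, h, w] array of sharp frames
    returns:      [n - window + 1, h, w] array of synthetic blurry frames
    """
    n = sharp_frames.shape[0]
    return np.stack([sharp_frames[i:i + window].mean(axis=0)
                     for i in range(n - window + 1)])

sharp = np.random.rand(10, 4, 4).astype(np.float32)
blur = synthesize_blur(sharp, window=3)
print(blur.shape)  # (8, 4, 4)
```

Note that the window shrinks the frame count, which is why a "corrupted" folder would naturally hold fewer images than the sharp source.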

Hello @chensong1995, sorry to bother you again. I ran into some confusing issues in your test code:

```
def test(self):
    print('Testing on REDS')
    for key in self.model.keys():
        self.model[key].eval()
    metrics = ...
```

Hello @chensong1995, thanks for your reply. What I meant is that in the .hdf5 file you provided, a video contains 500 'sharp_frame' entries but only 485 'blur_frame'...
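If the blurry frames were indeed produced by a sliding window over the sharp frames (an assumption on my part), a window length of 16 would explain the count mismatch exactly, since a window of w over n frames yields n - w + 1 outputs:

```python
# Hypothetical parameters: 500 sharp frames, sliding window of 16
n_sharp, window = 500, 16
n_blur = n_sharp - window + 1  # sliding-window output count
print(n_blur)  # 485
```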