vidurbervilles
Thanks for your reply @okunator!
> Thank you so much for sharing!
>
> As mentioned in the title, could you provide insights into the number of epochs required to achieve high-resolution, fine details during...
Hi @explainingai-code, Thanks for replying!! Let me share the config file.

```yaml
dataset_config:
  im_channels: 3
  im_size: 256
  name: 'cell'
diffusion_params:
  num_timesteps: 1000
  beta_start: 0.00085
  beta_end: 0.012
ldm_params:
  ...
```
Many thanks for your kind reply @explainingai-code! I am testing with more vectors in the codebook - do you think it can help based on your experience? Additionally, what parameter...
> @Vadori You can try with more vectors. I didn't find it helped much on the dataset that I worked with (CelebHQ), but give it a try, might help for your case....
Hi again @explainingai-code, Thank you once again for your helpful responses, so much appreciated! I noticed that in the current implementation, the cross-attention mechanism is applied only when using text...
Hi @explainingai-code, Thank you! Why would you say that avoiding cross-attention would work better? I am interested in your intuition, even though you did not experiment with it. I am...
> By better, I am mainly referring to how easy it is for the model to learn 'how to use spatial conditioning'. In concatenation, because of the convolution layer,...
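For context, the concatenation approach being discussed can be sketched as below. This is a minimal hypothetical illustration (the class and parameter names are my own, not from the repo): the spatial condition is concatenated channel-wise with the noisy latent, so the first convolution sees condition and latent pixels at the same locations, which is what makes the conditioning easy to exploit locally.

```python
import torch
import torch.nn as nn

class ConcatConditionedBlock(nn.Module):
    """Hypothetical sketch: condition a denoising UNet's input by
    channel concatenation rather than cross-attention."""

    def __init__(self, latent_ch=4, cond_ch=3, out_ch=64):
        super().__init__()
        # The first conv receives latent and condition channels together,
        # so each kernel mixes them at aligned spatial positions.
        self.conv = nn.Conv2d(latent_ch + cond_ch, out_ch,
                              kernel_size=3, padding=1)

    def forward(self, noisy_latent, spatial_cond):
        # Both tensors must already share spatial size (B, C, H, W).
        x = torch.cat([noisy_latent, spatial_cond], dim=1)
        return self.conv(x)

block = ConcatConditionedBlock()
z = torch.randn(1, 4, 32, 32)   # noisy latent
c = torch.randn(1, 3, 32, 32)   # spatial condition map
out = block(z, c)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```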
> Got it. Regarding simple downscaling leading to loss of details, another thing you could try is instead of passing a downsampled version, pass normal (same size as original image)...
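The suggestion above (full-resolution condition image processed by learned downsampling instead of naive resizing) might look something like this sketch. All names here are hypothetical, and the channel counts and downsampling factor are assumptions for illustration:

```python
import math
import torch
import torch.nn as nn

class LearnedCondDownsampler(nn.Module):
    """Hypothetical sketch: rather than bilinearly shrinking the condition
    image to latent resolution (losing fine detail up front), pass it at
    full resolution through strided convs so the network learns which
    details to keep while reducing resolution."""

    def __init__(self, in_ch=3, out_ch=4, factor=8):
        super().__init__()
        layers, ch = [], in_ch
        # One stride-2 conv per halving, e.g. factor=8 -> three halvings.
        for _ in range(int(math.log2(factor))):
            layers += [nn.Conv2d(ch, 32, 3, stride=2, padding=1), nn.SiLU()]
            ch = 32
        layers.append(nn.Conv2d(ch, out_ch, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, cond_image):
        return self.net(cond_image)

down = LearnedCondDownsampler()
img = torch.randn(1, 3, 256, 256)        # full-resolution condition image
feat = down(img)                          # latent-resolution features
print(feat.shape)  # torch.Size([1, 4, 32, 32])
```

The output can then be concatenated with the noisy latent in place of a naively downscaled condition image.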
@ericup My question is slightly different. I noticed that the model without the refinement module tends to produce overly regularized (i.e., overly smooth) shapes, even when using 128 contour points....