Jueqi Wang

13 comments by Jueqi Wang

Hi! Thanks for your interest in our work! You can download the vgg16 weights from: https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt Please don't hesitate to let me know if you run into any issues when...
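For reference, a minimal sketch of downloading and loading those weights; the local file name is arbitrary, and the use of `torch.jit.load` assumes the file is the TorchScript module shipped with the StyleGAN2-ADA-PyTorch metrics, not something specific to this repository:

```python
# Minimal sketch: fetch the vgg16 weights and load them with PyTorch.
import torch

VGG16_URL = "https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt"
DEST = "vgg16.pt"  # local path; adjust as needed

torch.hub.download_url_to_file(VGG16_URL, DEST)

# The StyleGAN2-ADA metrics vgg16.pt is distributed as a TorchScript module,
# so torch.jit.load is typically the right loader (assumption; fall back to
# torch.load if your copy is a plain state dict).
vgg16 = torch.jit.load(DEST).eval()
print(vgg16)
```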

Hi Jakub, Thank you for your interest in this project! Yeah, I do plan to update the documentation, hopefully before this year's MICCAI (so before early October this year). I...

Hi, Thank you so much for your interest in our work. I plan to release the code this week. I finished writing the code a long time ago, and I...

Hi, for the model part of this project, I use the model from this [Brain LDM paper](https://arxiv.org/pdf/2209.07162.pdf). You can find the autoencoder architecture [here](https://github.com/BioMedAI-UCSC/InverseSR/blob/main/models/aekl_no_attention.py). I use the images from the...

Thanks for your interest in our work! The weights for the pre-trained model can be found at https://drive.google.com/drive/folders/110l68um6gUJzECIv0AyF-4Fcw0rrQgA9?usp=drive_link Please don't hesitate to let me know if you have further questions!
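If it helps, here is a minimal sketch for fetching that Drive folder; the `gdown` package and the output directory name are assumptions, not part of the repository:

```python
# Minimal sketch, assuming the gdown package is installed (pip install gdown).
import gdown

FOLDER_URL = "https://drive.google.com/drive/folders/110l68um6gUJzECIv0AyF-4Fcw0rrQgA9?usp=drive_link"

# Downloads every file in the shared folder into ./pretrained_weights
gdown.download_folder(url=FOLDER_URL, output="pretrained_weights", quiet=False)
```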

It's an output file that I used to save the results.

Hi, Thanks for your interest in our work! Yes, these are the 100 subjects used for testing. Please don't hesitate to let me know if you have further questions! Jueqi

Can you share which test T1 image and which code you used?

Hi, the image put into the model should be registered to MNI space (the brain needs to be roughly in the center of the image), following what [the pretrained...
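As one way to do that registration, here is a minimal sketch using ANTsPy; the input path, template path, and choice of an affine transform are assumptions for illustration, not the repository's exact preprocessing pipeline:

```python
# Minimal sketch, assuming ANTsPy (pip install antspyx) and an MNI template on disk.
import ants

t1 = ants.image_read("subject_T1w.nii.gz")        # hypothetical input path
mni = ants.image_read("MNI152_T1_1mm.nii.gz")     # hypothetical MNI template path

# An affine registration is usually enough to place the brain roughly at the
# center of the volume in MNI space.
reg = ants.registration(fixed=mni, moving=t1, type_of_transform="Affine")
ants.image_write(reg["warpedmovout"], "subject_T1w_mni.nii.gz")
```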

Hi, I do think it is because the pre-trained model used cropped images (as stated in the [Brain LDM](https://arxiv.org/pdf/2209.07162) paper, they cropped the images to obtain a volume of 160...
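A minimal sketch of such a symmetric center crop on a 3D volume is below; since the exact crop size is cut off in the comment above, the target shape here is only a placeholder, not the figure from the Brain LDM paper:

```python
# Minimal sketch of a symmetric center crop for a 3D volume (placeholder shapes).
import numpy as np

def center_crop_3d(volume: np.ndarray, target_shape):
    """Crop a (D, H, W) volume symmetrically down to target_shape."""
    starts = [(s - t) // 2 for s, t in zip(volume.shape, target_shape)]
    slices = tuple(slice(st, st + t) for st, t in zip(starts, target_shape))
    return volume[slices]

vol = np.zeros((193, 229, 193), dtype=np.float32)   # e.g. a 1 mm MNI-space volume
cropped = center_crop_3d(vol, (160, 160, 160))      # placeholder target shape
print(cropped.shape)
```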