GxjGit

12 comments by GxjGit

Thanks for your reply. For example, our training code is the same as https://github.com/pytorch/examples/blob/main/imagenet/main.py . The dataset is ImageNet: https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_train.tar . It is a 2-D image dataset with labels. The...
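For reference, this is roughly the data-loading pattern used in that example (the dataset path and batch size below are placeholders):

```python
import torch
import torchvision.datasets as datasets
import torchvision.transforms as transforms

# Standard ImageNet preprocessing, as in the referenced example;
# "/data/imagenet/train" is a placeholder for the extracted ILSVRC2012 train folder.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

train_dataset = datasets.ImageFolder(
    "/data/imagenet/train",
    transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        normalize,
    ]))

train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=256, shuffle=True,
    num_workers=8, pin_memory=True)
```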

Thanks @harpone. I have tried WebDataset and it does perform very well. I want to try TensorStore to see if it's better. Like you said, decoding, data augmentation and other transforms must...
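For context, a rough sketch of the kind of WebDataset pipeline I tried; the shard pattern and sample keys below are assumptions, not the actual setup:

```python
import torch
import webdataset as wds
import torchvision.transforms as transforms

# Hypothetical shard naming; the real shards and keys may differ.
urls = "imagenet-train-{000000..000146}.tar"

preprocess = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

dataset = (
    wds.WebDataset(urls)
    .shuffle(1000)                 # shuffle within an in-memory buffer
    .decode("pil")                 # decode images to PIL in the worker process
    .to_tuple("jpg;png", "cls")    # (image, label) pairs from the tar members
    .map_tuple(preprocess, lambda x: x)
)

# WebDataset is an IterableDataset, so the usual DataLoader works on top of it.
loader = torch.utils.data.DataLoader(dataset, batch_size=256, num_workers=8)
```

The point being that decoding and augmentation happen inside the loader workers, which is where the comparison with TensorStore would matter.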

> @Fazziekey Hi, have you fixed this problem?

I have not modified the yaml settings. In addition, since I can't find the pretrained model, I have commented out the code that loads the pretrained weights for UNetModel and AutoencoderKL. I...
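To be concrete, what I commented out amounts to the usual checkpoint-restore step, roughly like the sketch below (the function name and paths are placeholders, not the repo's actual code):

```python
import torch

def load_pretrained(model: torch.nn.Module, ckpt_path: str) -> None:
    """Restore weights from a .ckpt file; stable-diffusion checkpoints
    usually keep the weights under a "state_dict" key."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")

# e.g. load_pretrained(unet_model, "sd-v1-4.ckpt")  # hypothetical call
```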

> maybe you should download the pretrained model from https://huggingface.co/CompVis/stable-diffusion-v1-4

OK, I am downloading it and trying it again.
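In case it helps others, one way to fetch the whole repo programmatically, assuming the huggingface_hub package is installed (the repo may be gated, so `huggingface-cli login` might be needed first):

```python
from huggingface_hub import snapshot_download

# Downloads the full repo contents into the local Hugging Face cache
# and returns the local directory path.
local_dir = snapshot_download(repo_id="CompVis/stable-diffusion-v1-4")
print(local_dir)
```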

> `conda env create -f environment.yaml` gives ResolvePackageNotFound:
>
> * cudatoolkit=11.3
> * libgcc-ng[version='>=9.3.0']
> * __glibc[version='>=2.17']
> * cudatoolkit=11.3
> * libstdcxx-ng[version='>=9.3.0']
>
> can it run...

> maybe you should download the pretrained model from https://huggingface.co/CompVis/stable-diffusion-v1-4

@Fazziekey I have updated the pretrained model and code, but encountered the same problem. How do we comprehend the...

> Hi, could you please share the detailed link of the pretrained model? I only found some *.ckpt models.

@GxjGit Look at this, click the "Files and versions" tab:

![image](https://user-images.githubusercontent.com/24288375/201883703-b6ff6753-2ebb-482a-835a-18775017b7c4.png)

And...
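One quick way to sanity-check the files in that tab is the sketch below; this assumes the diffusers package rather than the training repo's own loader:

```python
import torch
from diffusers import StableDiffusionPipeline

# Loads the diffusers-format weights (unet/, vae/, text_encoder/, ...)
# from the "Files and versions" tab; accepting the model license is required.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("test.png")
```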

> > maybe you should download the pretrained model from https://huggingface.co/CompVis/stable-diffusion-v1-4
>
> @Fazziekey I have updated the pretrained model and code, but encountered the same problem.
>
> ...

> I did not generate a meta list, but I moved all files to a directory to match the meta list with the following command: `find ./churches -name '*.*' |...
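The original `find` pipeline is truncated above; purely as an illustration, a minimal Python sketch of one way such a flat meta list of file paths could be built (directory and output names are hypothetical):

```python
from pathlib import Path

# Collect every file under ./churches (recursively) and write the
# relative paths to a text file that can serve as a meta list.
root = Path("./churches")
paths = sorted(p for p in root.rglob("*") if p.is_file())

with open("churches_meta.txt", "w") as f:
    for p in paths:
        f.write(str(p.relative_to(root)) + "\n")

print(f"wrote {len(paths)} entries")
```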