Request for Tiny-ImageNet code
I hope this message finds you well. First and foremost, I would like to express my sincere appreciation for the incredible work you have done on your project shared on GitHub.
I am particularly interested in your Tiny-ImageNet project and would greatly appreciate it if you could share some of the code related to it.
Best regards, kecheng
Hi Kecheng,
Thanks for your interest in this work, and I'm sorry for the delayed response.
This repo does not currently include official configurations for Tiny-ImageNet training, but it does support the Tiny-ImageNet dataset and can build flexible U-Nets. So you can try training Tiny-ImageNet models by writing a custom config file, like the following example:
# dataset params
dataset: 'tiny'
classes: 200
# other params ...
network:
  image_shape: [3, 64, 64]
  n_channels: 192
  ch_mults: [1, 2, 3, 4]
  is_attn: [False, True, True, True]
  attn_channels_per_head: 64
  dropout: 0.1
  n_blocks: 3
  use_res_for_updown: True
# other params ...
This example builds an ADM network (~300M parameters), similar to the one described in the EDM paper (Tables 7 & 8). It should give the best generative & discriminative performance, but I haven't been able to run it myself due to its large CUDA memory cost. If you have top-end GPUs, you can give it a try :)
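To make the config a bit more concrete, here is a small standalone sketch (not part of the repo) that expands the ch_mults / is_attn fields into the per-resolution widths and attention placements of the resulting U-Net. It assumes the usual DDPM++/ADM convention of one 2x downsample per level and attn_channels_per_head channels per attention head; the repo's actual builder may differ in details.

import yaml

# The `network` section of the ADM-style config above, inlined for a self-contained example.
config_text = """
network:
  image_shape: [3, 64, 64]
  n_channels: 192
  ch_mults: [1, 2, 3, 4]
  is_attn: [False, True, True, True]
  attn_channels_per_head: 64
  dropout: 0.1
  n_blocks: 3
  use_res_for_updown: True
"""

net = yaml.safe_load(config_text)["network"]
base = net["n_channels"]
side = net["image_shape"][-1]

# Expand the multipliers into per-level widths, assuming each level halves the resolution.
for level, (mult, attn) in enumerate(zip(net["ch_mults"], net["is_attn"])):
    res = side // (2 ** level)
    width = base * mult
    heads = width // net["attn_channels_per_head"] if attn else 0
    print(f"level {level}: {res}x{res}, {width} channels, "
          f"{net['n_blocks']} res-blocks, attention heads: {heads}")

Running this prints widths of 192/384/576/768 at resolutions 64/32/16/8, which is where the large parameter count and memory footprint come from.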
If you just want to reproduce the DDPM++ network used in our paper (~60M parameters), please use a config like this:
# dataset params
dataset: 'tiny'
classes: 200
# other params ...
network:
  image_shape: [3, 64, 64]
  n_channels: 128
  ch_mults: [1, 2, 2, 2]
  is_attn: [False, False, True, False]
  dropout: 0.1
  n_blocks: 4
  use_res_for_updown: True
# other params ...
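One more note: before launching a run with either config, it may help to sanity-check the file. Below is a minimal, illustrative helper (not from the repo) that loads a config with PyYAML and verifies a few Tiny-ImageNet-specific constraints; the key names follow the examples above, so adapt them if the repo's config loader expects something different.

import yaml

def check_config(path: str) -> dict:
    """Load a YAML config and run a few basic consistency checks (illustrative only)."""
    with open(path) as f:
        cfg = yaml.safe_load(f)

    net = cfg["network"]
    if len(net["ch_mults"]) != len(net["is_attn"]):
        raise ValueError("ch_mults and is_attn must have one entry per resolution level")
    if net["image_shape"][0] != 3:
        raise ValueError("Tiny-ImageNet images are RGB, expected 3 input channels")
    if net["image_shape"][1:] != [64, 64]:
        raise ValueError("Tiny-ImageNet images are 64x64")
    if cfg.get("dataset") == "tiny" and cfg.get("classes") != 200:
        raise ValueError("Tiny-ImageNet has 200 classes")
    return cfg

# Example usage (the path is hypothetical):
# cfg = check_config("configs/tiny_ddpmpp.yaml")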