
The FID performance

Open JoeSandos opened this issue 8 months ago • 1 comment

Hello, thank you for releasing the code. I downloaded the code and pretrained checkpoints, and used the following command to evaluate the unconditional CIFAR-10 EBM:

python test_inception.py --dataset cifar10 --logdir sandbox_cachedir/cachedir --exp cifar10_large_model_uncond --resume_iter 121200  --ensemble 1 --im_number 50000 --large_model True --num_steps 60 --repeat_scale 10 --step_lr 10

which I found performs better than the default repeat_scale of 100. I got the following results:

Inception score of 6.402187347412109 with std of 0.07995259016752243
FID of score 51.90580605898839

Compared to the paper, the Inception score is close, but the FID is much higher than the reported value. Could you please help me figure out whether my configuration is correct?
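For reference, FID is the Fréchet distance between two Gaussians fitted to Inception activations of real and generated images, so a large gap like this usually traces back to the statistics of the samples rather than the metric itself. A minimal sketch of the final distance computation (standard formula, not the repo's exact implementation; `fid_from_stats` is a hypothetical helper name):

```python
import numpy as np
from scipy import linalg

def fid_from_stats(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2),
    the Gaussians fitted to Inception activations of the two image sets."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; tiny imaginary
    # parts from numerical error are discarded.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff
                 + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(covmean))
```

Identical statistics give a distance of 0; any mismatch in mean or covariance of the activations pushes the score up.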

P.S. The other hyperparameters follow the default settings in the code and paper:

test_inception.py 
FLAGS: 
spec_iter: 1
spec_norm_val: 1.0
downsample: False
spec_eval: False
swish_act: False
dsprites_path: /root/data/dsprites-dataset/dsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz
imagenet_datadir: /root/imagenet_big
dshape_only: False
dpos_only: False
dsize_only: False
drot_only: False
dsprites_restrict: False
imagenet_path: /root/imagenet
cutout_inside: False
cutout_prob: 1.0
cutout_mask_size: 16
cutout: False
logdir: sandbox_cachedir/cachedir
exp: cifar10_large_model_uncond
cclass: False
bn: False
spec_norm: True
use_bias: True
use_attention: False
step_lr: 10.0
num_steps: 60
proj_norm: 0.01
batch_size: 512
resume_iter: 121200
ensemble: 1
im_number: 50000
repeat_scale: 10
noise_scale: 0.005
idx: 0
nomix: 10
scaled: True
large_model: True
larger_model: False
wider_model: False
single: False
datasource: random
dataset: cifar10
Model list: [121200]

JoeSandos avatar May 04 '25 07:05 JoeSandos

It's been a while since I've run the numbers, but could you try this command? You might be able to get lower FIDs by decreasing the Langevin step size.

python test_inception.py --exp=cifar10_large_network_smoke --num_steps=10 --batch_size=512 --step_lr=9.0 --resume_iter=121200 --im_number=1000 --repeat_scale=90 --scaled=False --noise_scale=0.005 --nomix=30 --large_model --ensemble=1
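To see why the step size matters, a minimal sketch of the Langevin sampling loop these flags control (assumed roles for `--step_lr`, `--num_steps`, and `--noise_scale`, for illustration only; the repo's actual update may differ in scaling details):

```python
import numpy as np

def langevin_sample(grad_energy, x, num_steps=60, step_lr=10.0, noise_scale=0.005):
    """Langevin dynamics: repeatedly descend the energy gradient
    while injecting Gaussian noise.

    grad_energy(x) returns dE/dx. step_lr scales the gradient step
    and noise_scale scales the noise, mirroring the CLI flags.
    """
    for _ in range(num_steps):
        x = x - step_lr * grad_energy(x) \
              + noise_scale * np.random.randn(*x.shape)
        x = np.clip(x, 0.0, 1.0)  # keep samples in image range
    return x
```

With a large step_lr the chain overshoots low-energy regions, which can hurt sample statistics (and hence FID) even when the Inception score looks reasonable; a smaller step with more noise mixing steps explores the mode more carefully.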

yilundu avatar May 04 '25 16:05 yilundu