MultiObjectiveOptimization
Unable to reproduce the average error reported in Table 1 of the paper
The average error on CelebA reported in the paper is 8.25, but my result is 10.14, obtained by running the code in this repo directly. I list my configuration below; is there anything wrong with it?
In configs.json:
"celeba": {
"path": "data/celeba",
"all_tasks": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20",
"21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39"],
"img_rows": 64,
"img_cols": 64
}
In sample.json:
{
"optimizer": "Adam",
"batch_size": 256,
"lr": 0.0005,
"dataset": "celeba",
"tasks": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20",
"21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39"],
"normalization_type": "loss+",
"algorithm": "mgda",
"use_approximation": true,
"scales": {"0":0.025, "1":0.025, "2":0.025, "3":0.025, "4":0.025, "5":0.025, "6":0.025, "7":0.025, "8":0.025, "9":0.025, "10":0.025,
"11":0.025, "12":0.025, "13":0.025, "14":0.025, "15":0.025, "16":0.025, "17":0.025, "18":0.025, "19":0.025, "20":0.025,
"21":0.025, "22":0.025, "23":0.025, "24":0.025, "25":0.025, "26":0.025, "27":0.025, "28":0.025, "29":0.025, "30":0.025,
"31":0.025, "32":0.025, "33":0.025, "34":0.025, "35":0.025, "36":0.025, "37":0.025, "38":0.025, "39":0.025}
}
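To make sure I understand the setup: below is my reading of what "algorithm": "mgda" with "normalization_type": "loss+" does before the per-task losses are combined. This is a rough, self-contained sketch rather than the repo's exact code (the function names are mine), and it only shows the closed-form two-task min-norm solution; with 40 tasks the min-norm problem is solved iteratively (Frank-Wolfe in the paper).

import torch

def loss_plus_normalize(grads, losses, eps=1e-8):
    # "loss+" (as I read it): divide each task's shared-parameter gradient
    # by (task loss * gradient L2 norm) before solving for the task weights.
    normed = {}
    for t in grads:
        flat = torch.cat([g.flatten() for g in grads[t]])
        gn = losses[t] * flat.norm()
        normed[t] = [g / (gn + eps) for g in grads[t]]
    return normed

def min_norm_two_tasks(g1, g2, eps=1e-8):
    # Minimize ||a*g1 + (1 - a)*g2||^2 over a in [0, 1];
    # closed form: a* = clip(((g2 - g1) . g2) / ||g1 - g2||^2, 0, 1).
    # g1 and g2 are flattened 1-D gradient vectors.
    diff = g2 - g1
    alpha = torch.dot(diff, g2) / (torch.dot(diff, diff) + eps)
    return float(alpha.clamp(0.0, 1.0))

With "use_approximation": true, my understanding is that the task gradients are taken with respect to the shared representation (the MGDA-UB upper bound in the paper) rather than all shared parameters, which is much cheaper with 40 tasks.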
The learning rate is multiplied by 0.85 every 10 epochs:
if (epoch + 1) % 10 == 0:
    # Every 10 epochs, multiply the LR by 0.85
    for param_group in optimizer.param_groups:
        param_group['lr'] *= 0.85
    logger.info('Multiply the learning rate by {} [{} steps]'.format(0.85, n_iter))
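As a side note, the same schedule can be written with PyTorch's built-in StepLR scheduler, which should be equivalent up to the logging line (the epoch body is elided):

from torch.optim.lr_scheduler import StepLR

# Multiply the LR by 0.85 every 10 epochs, matching the manual loop above
scheduler = StepLR(optimizer, step_size=10, gamma=0.85)

for epoch in range(num_epochs):
    # ... train for one epoch ...
    scheduler.step()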
Finally, I train the model on two GPUs.
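For completeness, this is how I compute the 10.14 figure (a minimal sketch; I assume the metric in Table 1 is the per-attribute classification error, i.e. 100 * (1 - accuracy), averaged over all 40 attributes):

import numpy as np

def average_error(correct, total):
    # correct/total: dicts mapping task id -> number of correctly
    # classified examples / number of examples seen
    per_task = [100.0 * (1.0 - correct[t] / total[t]) for t in correct]
    return float(np.mean(per_task))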
Hi, Ghost! I also failed to reproduce the results in Table 1. Have you solved this problem yet?