HAT-L
Dear author, I have recently been reproducing the HAT neural network model. Because the server in my laboratory has limited compute, could you please open-source the pre-trained HAT-L x2 model? Thank you for your contribution to open source.
@1222056426 Do you mean the HAT-L SRx2 model trained on ImageNet? Unfortunately, that intermediate checkpoint is no longer preserved.
OK, thank you for your reply. I still have one question: how can I train on my own dataset? My images do not have a uniform resolution, and when I set the path in the yml file I still get an error:
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/distributed/launch.py", line 260, in
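On the non-uniform resolution point: paired SR training pipelines usually crop fixed-size sub-images out of images of arbitrary sizes before training (BasicSR, which HAT builds on, ships an `extract_subimages.py` script for this). Below is a minimal sketch of the tiling logic only; the function name `crop_boxes` and its parameters are illustrative, not part of the HAT codebase.

```python
def crop_boxes(width, height, patch, step):
    """Compute (left, top, right, bottom) crop boxes that tile an image
    of arbitrary size with fixed-size `patch` x `patch` sub-images.

    Boxes start every `step` pixels; the last row/column is shifted
    inward so every box stays inside the image. Images smaller than
    `patch` in either dimension yield no boxes and should be skipped
    (or padded/resized) by the caller.
    """
    if width < patch or height < patch:
        return []
    xs = list(range(0, width - patch + 1, step))
    if xs[-1] != width - patch:          # make sure the right edge is covered
        xs.append(width - patch)
    ys = list(range(0, height - patch + 1, step))
    if ys[-1] != height - patch:         # make sure the bottom edge is covered
        ys.append(height - patch)
    return [(x, y, x + patch, y + patch) for y in ys for x in xs]


# Example: a 100x80 LR image, 48px patches, 24px stride.
boxes = crop_boxes(100, 80, 48, 24)
# For x2 SR, the matching HR crop is the LR box scaled by 2, which keeps
# each LR/HR sub-image pair aligned:
hr_boxes = [tuple(2 * v for v in b) for b in boxes]
```

After cropping, every sub-image has the same resolution, so the paired-image dataset in the yml file can point at the sub-image folders instead of the originals.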