zhangxianchao
Hi IceClear, I found that the SPAQ dataset has many images beyond 2K resolution, such as 5488x4112, 4032x3024, 4000x3000, etc., yet in your experiments the SROCC/PLCC on SPAQ was very high....
Hi IceClear, thanks for your reply! Why is it meaningless? Does this mean that because CLIP is only trained on 224x224 inputs, the method is limited by the pretrained model?
> Hi. Thanks for your interest in our work. Different attributes correspond to different attribute (antonym) prompt pairs. You only need to change the classname in the config file: [here](https://github.com/IceClear/CLIP-IQA/blob/main/configs/clipiqa/clipiqa_attribute_test.py#L16)....
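For context, the scoring idea behind each attribute pair (as described in the CLIP-IQA paper) is to compare the image against an antonym prompt pair and take a softmax over the two cosine similarities. Below is a minimal sketch using the vanilla `clip` package, not the repo's exact pipeline (CLIP-IQA additionally relaxes CLIP's positional-embedding resolution constraint); the prompt pairs and the file name `test.jpg` are just illustrative placeholders.

```python
# Minimal sketch of antonym-pair scoring with vanilla CLIP
# (illustrative only; not the exact CLIP-IQA implementation).
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

# Each attribute is an antonym prompt pair; edit this dict to test other attributes.
attribute_pairs = {
    "quality":    ("Good photo.", "Bad photo."),
    "brightness": ("Bright photo.", "Dark photo."),
    "sharpness":  ("Sharp photo.", "Blurry photo."),
}

image = preprocess(Image.open("test.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    img_feat = model.encode_image(image)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    for name, (pos, neg) in attribute_pairs.items():
        txt = clip.tokenize([pos, neg]).to(device)
        txt_feat = model.encode_text(txt)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        logits = 100.0 * img_feat @ txt_feat.t()       # scaled cosine similarities
        score = logits.softmax(dim=-1)[0, 0].item()    # probability of the positive prompt
        print(f"{name}: {score:.3f}")
```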