Zongyan Han
Hello! The batch size probably has a large effect on the contrastive embedding; please refer to the batch sizes given in the paper: "We use a random mini-batch size of 4,096 for AWA1 and AWA2, 2,048 for CUB, 3,072 for FLO, and 1,024 for SUN in our method."
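For convenience, here is a minimal sketch of wiring those batch sizes into a training script; the `BATCH_SIZES` dict and `get_batch_size` helper are hypothetical names, and only the numbers come from the paper quote above.

```python
# Hypothetical per-dataset config; the mapping and function names are
# illustrative, only the batch sizes come from the paper.
BATCH_SIZES = {
    "AWA1": 4096,
    "AWA2": 4096,
    "CUB": 2048,
    "FLO": 3072,
    "SUN": 1024,
}

def get_batch_size(dataset: str) -> int:
    """Return the paper's reported mini-batch size for a dataset."""
    return BATCH_SIZES[dataset]
```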
Hello! Please refer to the reply above.
My sincere apologies: the epoch setting for SUN was wrong; 350 is the SUN epoch count from another paper. Please give me a moment to sort the parameters out again.
> > My sincere apologies: the epoch setting for SUN was wrong; 350 is the SUN epoch count from another paper. Please give me a moment to sort the parameters out again.
>
> Thank you very much. Could you also publish the settings of the ins_temp and cls_temp parameters for each dataset?

Hello! These two parameters are the same as in the paper.
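For readers wondering where ins_temp and cls_temp enter, below is a minimal sketch of a temperature-scaled, InfoNCE-style contrastive loss; the function, argument names, and shapes are illustrative assumptions rather than the repo's actual implementation. The point is only the role of the temperature: it divides the similarity logits, and a smaller value sharpens the distribution.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, candidates, pos_index, temperature):
    """Illustrative InfoNCE-style loss: cosine similarities are divided
    by a temperature (e.g. ins_temp or cls_temp), then cross-entropy is
    taken against the index of the positive candidate."""
    anchor = F.normalize(anchor, dim=-1)          # shape (d,)
    candidates = F.normalize(candidates, dim=-1)  # shape (n, d)
    logits = candidates @ anchor / temperature    # shape (n,)
    target = torch.tensor([pos_index])
    return F.cross_entropy(logits.unsqueeze(0), target)
```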
Hi, @mrzhu666! Thanks for your interest in our work! We are collecting the parameters for each dataset, and we will release them as soon as possible. Best wishes!
I am sorry for the delay in releasing the parameters. Perhaps I can first share some key parameters with you:

- manualSeed: AWA1/AWA2 (9182), FLO (806), SUN (4115)
- nz = attSize (AWA1/AWA2/CUB/SUN); nz = 512 (FLO)
...
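As a hedged illustration of how those manualSeed values might be applied, the snippet below uses the standard PyTorch seeding recipe; the `MANUAL_SEEDS` dict and `set_manual_seed` helper are made-up names, and only the seed numbers come from the comment above. (nz is presumably the dimension of the generator's noise vector, set equal to the attribute size attSize on AWA1/AWA2/CUB/SUN and to 512 on FLO.)

```python
import random
import numpy as np
import torch

# Seeds quoted in the comment above; the dict name is illustrative.
MANUAL_SEEDS = {"AWA1": 9182, "AWA2": 9182, "FLO": 806, "SUN": 4115}

def set_manual_seed(seed: int) -> None:
    """Seed Python, NumPy, and PyTorch RNGs so runs are reproducible."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op when CUDA is unavailable

set_manual_seed(MANUAL_SEEDS["AWA1"])
```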
@webcsm Sorry for the mistake. Yes, thank you very much for the reminder. I have corrected my comments.
Hi, @webcsm. In the code, I apply only the preprocessing step to all datasets and perform MinMax normalization on the visual features. Sorry for omitting this in the...
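For concreteness, here is a minimal sketch of that MinMax step, assuming sklearn-style scaling of the extracted visual features; the array names and shapes are placeholders, and fitting on the training features before transforming the test features is a common convention rather than a confirmed detail of this repo.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Placeholder arrays standing in for extracted visual features
# (e.g. 2048-d ResNet features); shapes are illustrative.
train_feat = np.random.rand(100, 2048)
test_feat = np.random.rand(20, 2048)

# Fit the scaler on the training features and reuse the same transform
# on the test features so both lie in the same [0, 1] range per dimension.
scaler = MinMaxScaler()
train_feat = scaler.fit_transform(train_feat)
test_feat = scaler.transform(test_feat)
```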
Hi, @webcsm, for AWA1 it takes about 150 epochs to reach the best results. These hyper-parameters are tuned based on performance on the validation set. For...
Hi, @zhihou7! I think these are all the required parameters, and the other parameters are basically the same for each dataset. Could you please show the logs and all the parameters...