Guangzhi Wang
> @daoyuan98 were you able to find a solution for the issue?

Unfortunately, no.
> @daoyuan98 were you able to solve this? got exactly the same error but I was working on WSL2. unable to solve this..
Hi, thank you for your question. It is indeed unsupervised domain adaptation, and I didn't use target labels in training. See model/MDAN.py, lines 69-74: `source_labels = lambda: ...`
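For illustration, here is a minimal sketch (TensorFlow 1.x style) of how a lambda-based selector can guarantee that only source labels enter the training graph; the names (`domain_idx`, `src_label_batches`) are hypothetical, and this is not the actual code on those lines:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

k = 2  # number of source domains (assumed)
domain_idx = tf.placeholder(tf.int32, [], name="domain_idx")
# One label placeholder per *source* domain; note there is no
# target-label placeholder anywhere in the graph.
src_label_batches = [tf.placeholder(tf.int64, [None]) for _ in range(k)]

# tf.case runs only the lambda whose predicate matches the current
# source domain, so target labels are never consumed in training.
source_labels = tf.case(
    [(tf.equal(domain_idx, i), (lambda t=t: t))
     for i, t in enumerate(src_label_batches)],
    exclusive=True)
```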
You're welcome. I personally think that is a very interesting and open question. One possible reason is that the goal of this architecture is to make target feature...
Hi, the `regular_train_op` on line 66 is just an op I added for testing while writing the code; it optimizes only a single classification loss, with no domain adversarial loss...
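As a hedged sketch (TF 1.x style; `class_loss`, `domain_loss`, and `gamma` are stand-in names, not the repo's variables), the difference is simply which losses the optimizer sees:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Dummy scalar variables standing in for the real losses.
class_loss = tf.get_variable("class_loss", initializer=1.0)
domain_loss = tf.get_variable("domain_loss", initializer=1.0)
gamma = 0.1  # assumed trade-off weight

# Like `regular_train_op`: only the classification loss is minimized,
# so nothing pushes the features toward domain invariance.
regular_train_op = tf.train.AdamOptimizer(1e-4).minimize(class_loss)

# The full MDAN-style objective also optimizes the adversarial term.
full_train_op = tf.train.AdamOptimizer(1e-4).minimize(
    class_loss + gamma * domain_loss)
```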
Hi, thank you for your interest in the code. The domain loss is actually computed as a binary classification loss. Suppose there are k source domains, {S1, S2, ..., Sk}; we...
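A minimal sketch of that per-pair loss (TensorFlow 1.x style; the discriminator logits and function name here are hypothetical, not the repo's exact code): each source-target pair (S_i, T) gets a discriminator trained with source features labeled 1 and target features labeled 0.

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

def pair_domain_loss(src_logits, tgt_logits):
    """Binary cross-entropy for the i-th domain discriminator:
    source features are labeled 1, target features 0."""
    src = tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(src_logits), logits=src_logits)
    tgt = tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.zeros_like(tgt_logits), logits=tgt_logits)
    return tf.reduce_mean(src) + tf.reduce_mean(tgt)
```

MDAN then aggregates the k per-pair losses (e.g., a max or soft-max combination) into the adversarial objective.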
Hi, I personally think this should not influence the results as long as the domain labels are consistent across all source-target pairs, i.e., always set source=1 & target=0 or source=0 & target=1.
Indeed, I have the same video list as yours. Currently, I just ignore the missing videos, but I haven't finished training since I haven't run DensePose yet.
Hi, thank you for your interest in our work! For visual tokenizer distillation, we followed the protocol in [FD](https://github.com/SwinTransformer/Feature-Distillation) and performed feature distillation on the ImageNet-1K dataset.
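In case a sketch helps, here is a minimal PyTorch illustration of FD-style feature distillation; the tiny `nn.Linear` encoders, dimensions, and the normalized smooth-L1 objective are simplifications for illustration (FD's exact recipe, e.g., feature whitening, is in the linked repo):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins: in practice the teacher is the frozen visual tokenizer
# encoder and the student is the model being distilled.
teacher = nn.Linear(768, 512)
student = nn.Linear(768, 384)
proj = nn.Linear(384, 512)  # maps student features to teacher dim

for p in teacher.parameters():
    p.requires_grad = False  # teacher stays frozen

x = torch.randn(8, 768)  # a batch of (flattened) image features
with torch.no_grad():
    t_feat = teacher(x)
s_feat = proj(student(x))

# Regress the teacher's features; normalization keeps scales comparable.
loss = F.smooth_l1_loss(F.normalize(s_feat, dim=-1),
                        F.normalize(t_feat, dim=-1))
loss.backward()
```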
Yes, we have met the same issue... We currently use a batch size of 1 to evaluate all models.