Sample-design-alt

Results: 8 comments of Sample-design-alt

Yes, I have the same issue with this problem.

> I use the Ubuntu system; the error is as follows: ERROR: Command errored out with exit status 1: command: /usr/local/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/data/chenrj/SimTSC-main/pydtw/setup.py'"'"'; __file__='"'"'/data/chenrj/SimTSC-main/pydtw/setup.py'"'"';f...

I have the same question as you. The paper states 'It excels in zero-shot forecasting for out-of-domain data'. But how can the model get the 'prompt token'? The prompt...

I have the same problem as you.

The LLM backbone is frozen and everything else is trainable. I tried both GPT2 and BERT as the backbone, and both give the same result.
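A minimal PyTorch sketch of the frozen-backbone setup described above: the LLM backbone's parameters have `requires_grad` disabled, while the surrounding projection and classification layers stay trainable. The module names (`input_proj`, `classifier`) and dimensions are hypothetical placeholders, not the original repo's code.

```python
import torch
import torch.nn as nn

class FrozenBackboneModel(nn.Module):
    """Illustrative wrapper: frozen LLM backbone, trainable head layers."""

    def __init__(self, backbone: nn.Module, in_dim: int, d_model: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        # Freeze every backbone parameter; the optimizer will skip them.
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.input_proj = nn.Linear(in_dim, d_model)       # trainable
        self.classifier = nn.Linear(d_model, num_classes)  # trainable

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.input_proj(x)   # project inputs into the backbone's width
        h = self.backbone(h)     # frozen feature extractor (e.g. GPT2/BERT trunk)
        return self.classifier(h)
```

In practice one would pass only the trainable parameters to the optimizer, e.g. `torch.optim.Adam(p for p in model.parameters() if p.requires_grad)`, so the frozen backbone never receives gradient updates.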

The performance improved somewhat, but there is still a gap compared to other methods. I found that if I take the mean over patch_num before classifying, the results are very poor, so a final layer of nn.Linear(D*patch_num, num_classes) works better. How did you design your output layer?
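A sketch of the two classification heads compared above, assuming the backbone emits patch embeddings of shape (batch, patch_num, D); the dimensions here are illustrative, not from the original code.

```python
import torch
import torch.nn as nn

batch, patch_num, D, num_classes = 32, 8, 64, 10
x = torch.randn(batch, patch_num, D)  # per-patch embeddings from the backbone

# Variant 1: mean-pool over patches, then classify (reported above to work poorly)
mean_head = nn.Linear(D, num_classes)
logits_mean = mean_head(x.mean(dim=1))            # (batch, num_classes)

# Variant 2: flatten all patches, then classify (reported above to work better)
flat_head = nn.Linear(D * patch_num, num_classes)
logits_flat = flat_head(x.reshape(batch, -1))     # (batch, num_classes)
```

The flattened head keeps the per-patch information separate at the cost of `D * patch_num` input weights, whereas mean pooling collapses all patches into one vector before classification.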

Sure. You can also open an issue directly; that makes discussion easier.

Which dataset are you testing on, and what is the dataset description? How is the model's forward written?