In-Context Learning, Relevant Series, and self.label_len
Hello, I’m very interested in your paper and algorithm — thank you for sharing the code.
I have a question regarding the In-Context Learning (Prompting Mechanism) and the Relevant Series described in the paper.
Based on the figure in the paper (attached image), I’m wondering whether the relevant series corresponds to the self.label_len variable in data_loader.py. Could you please clarify this point?
If not, could you please explain the role of self.label_len in the code?
Thanks :)
Thanks for your question. label_len here is used for next-token prediction. Given n tokens from a training sample, seq_len = n * token_len is the total length of the sample, which contains n-1 training signals for next tokens. label_len therefore denotes the input part of the sample, i.e. the first n-1 tokens. Shifting that input forward by token_len (so that each of the n-1 positions is aligned with the token that follows it) gives the ground-truth values used as the optimization objective.
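To make the shifting concrete, here is a minimal NumPy sketch of the input/target construction described above. The values token_len = 96 and n = 4 are placeholders for illustration, not taken from data_loader.py, and the array is a stand-in for a real series window:

```python
import numpy as np

# Placeholder values for illustration; the real ones live in data_loader.py.
token_len = 96                    # length of one token in time steps
n = 4                             # number of tokens per training sample
seq_len = n * token_len           # whole length of one training sample
label_len = (n - 1) * token_len   # the input part: the first n-1 tokens

sample = np.arange(seq_len, dtype=np.float32)  # stand-in for a series window

# Input: tokens 1 .. n-1. Target: the same window shifted by token_len,
# i.e. tokens 2 .. n, so each of the n-1 input tokens is trained to
# predict the token that follows it.
x = sample[:label_len]            # model input
y = sample[token_len:]            # ground truth, shifted by one token

assert x.shape == y.shape == ((n - 1) * token_len,)
```

This is why a single sample of n tokens yields n-1 next-token training signals: every token except the last has a ground-truth successor inside the same window.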