Chenghao Liu
@kavururajesh We have updated the Hugging Face interface. Please check the notebook example; you can also refer to this [issue](https://github.com/SalesforceAIResearch/uni2ts/issues/40).
> I have also gotten nan'd weights when doing pretraining on A100s and a 3090 ti when using the lotsa or gluonts datasets. Many of the datasets are very sparse...
Closed; this is the same issue as #122.
Hi @Sample-design-alt, have you solved the issue? If so, I will close it. I am not sure whether this is caused by a model download issue from Hugging Face. Please check this...
Hi @dany4142, sorry for the late response.
1. You can find the fine-tuned model at `outputs/finetune/${hydra:runtime.choices.model}/${hydra:runtime.choices.data}/${run_name}`, which is set in the config file: https://github.com/SalesforceAIResearch/uni2ts/blob/cadebd82106e32409b7854b033dbd7a68de87fc0/cli/conf/finetune/default.yaml#L3C10-L3C99
2. You can load the model...
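To make the output path concrete, here is a minimal sketch of what the Hydra interpolation in that template resolves to. The `model`, `data`, and `run_name` values below are placeholders for illustration, not the actual config choices:

```python
# Illustrative sketch: reconstruct the checkpoint directory that the
# finetune config's template
#   outputs/finetune/${hydra:runtime.choices.model}/${hydra:runtime.choices.data}/${run_name}
# resolves to. Placeholder values; your actual choices will differ.
def finetune_output_dir(model: str, data: str, run_name: str) -> str:
    """Mirror the hydra output-path template as a plain string."""
    return f"outputs/finetune/{model}/{data}/{run_name}"

print(finetune_output_dir("moirai_small", "etth1", "my_run"))
# -> outputs/finetune/moirai_small/etth1/my_run
```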
Thanks @yaoqih. If there are no further questions, I will close this issue.
The length of each training sample is required to be longer than `min_time_patches` (2) patches. Here the patch size range is (32, 33), which means each training sample should be longer...
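The constraint above can be sketched as a simple check: a sample must cover at least `min_time_patches` full patches at the given patch size. Function and argument names here are illustrative, not the actual uni2ts implementation:

```python
# Hedged sketch of the sample-length constraint: with min_time_patches = 2
# and patch_size = 32, a training sample needs at least 64 time steps.

def min_sample_length(patch_size: int, min_time_patches: int = 2) -> int:
    """Smallest series length that yields min_time_patches full patches."""
    return patch_size * min_time_patches

def is_long_enough(length: int, patch_size: int, min_time_patches: int = 2) -> bool:
    """True if the sample contains at least min_time_patches full patches."""
    return length // patch_size >= min_time_patches

print(min_sample_length(32))   # -> 64
print(is_long_enough(50, 32))  # -> False (only 1 full patch of 32)
print(is_long_enough(64, 32))  # -> True
```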
Hi @qingyuanxingsi, do you have any further questions? If not, I will close this issue.
Hi @jpmc216, in the current version, the config formats for fine-tuning data and validation data are different. For the fine-tuning data, you can follow this example: https://github.com/SalesforceAIResearch/uni2ts/blob/main/cli/conf/finetune/data/etth1.yaml. You can...
Hi @ngupta-slb, have you solved the problem? If so, I will close this issue.