Will you share the source code for TimeMixer++?
Many thanks for your interest in our latest work. As per our usual practice, we will release our code when the camera-ready version of the paper is published. You can keep an eye on the TimeMixer repository, and we will notify you as soon as the code is released.
Thanks for the outstanding work! The TimeMixer++ paper is out now — has the code been updated yet?
Thank you very much for your interest in our work. Following our usual practice, we will release the code with the camera-ready version once the paper is formally accepted and published. Please keep an eye on our code repository; we will notify you as soon as it is updated. Thank you for supporting our work.
Thanks for the excellent work! The TimeMixer++ paper has been released, but the code seems to have been withdrawn? When will it be made available again?
Hello, thank you very much for your interest. We have not updated the code yet; we will release it with the camera-ready version once the paper is formally accepted and published. Please keep following our code repository. Thanks again for your support.
When will you release the source code of TimeMixer++? By the way, it's a really impressive model!
The paper has been published at ICLR 2025 — when will the code be released?
Hello, we have been following your outstanding work. When will the source code of TimeMixer++ be available?
Thank you all for your interest and support for our work! At the moment, the relevant code has not yet received full approval for open-sourcing. In light of this, we have organized and released the available code and model weights in an anonymous GitHub repository for your convenience. Once we obtain full open-source approval, we will also work towards releasing the project on Hugging Face. Please note, however, that we have not received company authorization to release under the Apache license. As such, the current release is not permitted for use in commercial projects. We appreciate your understanding and continued support!
The training code is not included in the anonymous GitHub repository. Could you share the official code sooner? The paper and the model are excellent.
Thank you all for your interest and support for our work! At the moment, the relevant code has not yet received full approval for open-sourcing. In light of this, we have organized and released the available code and model weights in an anonymous GitHub repository for your convenience. Once we obtain full open-source approval, we will also work towards releasing the project on Hugging Face. Please note, however, that we have not received company authorization to release under the Apache license. As such, the current release is not permitted for use in commercial projects. We appreciate your understanding and continued support!
Thanks for the outstanding work! Could you provide the configuration parameters for the PEMS datasets, or should we simply follow the TimeMixer settings? Also, while reproducing the results, I found that the model's GPU memory usage and training speed are not on par with TimeMixer, unlike what Figure 14 of the paper shows. Could this be related to my hyperparameter settings? I am currently basing them mainly on the Traffic dataset configuration.
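For anyone comparing memory and speed against Figure 14, the numbers depend heavily on batch size, d_model, and the number of scales, so it is worth measuring them directly on your own setup. Below is a minimal sketch of one way to time iterations and capture peak GPU memory in PyTorch; the two-layer model and the random batches are placeholders standing in for TimeMixerPP and the real data loader, not the authors' code.

```python
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model standing in for TimeMixerPP; swap in the real model and loader.
model = nn.Sequential(nn.Linear(321, 64), nn.ReLU(), nn.Linear(64, 321)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# Fake ECL-shaped batches: (batch_size, seq_len, n_vars)
batches = [torch.randn(32, 96, 321, device=device) for _ in range(20)]

if device == "cuda":
    torch.cuda.reset_peak_memory_stats()

iter_times = []
for x in batches:
    start = time.time()
    optimizer.zero_grad()
    pred = model(x)            # toy forward pass; the real model forecasts a future window
    loss = criterion(pred, x)  # placeholder target, just to exercise backward()
    loss.backward()
    optimizer.step()
    if device == "cuda":
        torch.cuda.synchronize()  # make the timing include the actual GPU work
    iter_times.append(time.time() - start)

print(f"mean time per iteration: {sum(iter_times) / len(iter_times):.3f} s")
if device == "cuda":
    print(f"peak GPU memory: {torch.cuda.max_memory_allocated() / 2**20:.1f} MiB")
```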
Thank you all for your interest and support for our work! At the moment, the relevant code has not yet received full approval for open-sourcing. In light of this, we have organized and released the available code and model weights in an anonymous GitHub repository for your convenience. Once we obtain full open-source approval, we will also work towards releasing the project on Hugging Face. Please note, however, that we have not received company authorization to release under the Apache license. As such, the current release is not permitted for use in commercial projects. We appreciate your understanding and continued support!
Thank you for the excellent work and for sharing the code! I'm currently using this model for electricity load forecasting, so I tried training TimeMixer++ on the ECL dataset with the TimeMixer code. I have to admit, the results are really impressive. However, there's something that's been bothering me.
I noticed that the model's final output and the metric calculations are all done on normalized data. For my electricity forecasting application, this doesn't make much sense: the gap between the original data and the normalized data is too large, and results in this form are not usable in a real-world electricity forecasting scenario. I also noticed that in your paper you cite models like Informer and Autoformer, as well as the Time-Series-Library, and they all seem to follow the same approach. I don't quite understand why the outputs aren't converted back to the original scale, or why the metrics aren't calculated on the original scale. Could you help explain this? By the way, in my forecasting scenario I only split the data into training and testing sets, without a separate validation set. Would that be okay for this model?
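On the normalization point: the convention in the Informer/Autoformer/Time-Series-Library benchmarks is to report MSE and MAE on the z-score-normalized series so that numbers are comparable across variables and datasets with very different magnitudes. For deployment, you can inverse-transform the predictions before computing metrics. A minimal sketch of that step is below, assuming the usual setup where a StandardScaler was fitted on the raw training split; the arrays here are random placeholders for the real test outputs, not part of the released code.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Stand-ins for what a real run produces: a scaler fitted on the raw training series,
# and normalized predictions/targets of shape (n_samples, pred_len, n_vars).
# On ECL the test arrays would be (5165, 96, 321); smaller shapes keep the sketch light.
raw_train = np.random.rand(1000, 321) * 500.0   # e.g. load in kW
scaler = StandardScaler().fit(raw_train)
preds = np.random.randn(200, 96, 321)
trues = np.random.randn(200, 96, 321)

def inverse_scale(arr, scaler):
    """Undo the channel-wise z-score normalization on a (n, pred_len, n_vars) array."""
    n, h, c = arr.shape
    return scaler.inverse_transform(arr.reshape(-1, c)).reshape(n, h, c)

preds_raw = inverse_scale(preds, scaler)
trues_raw = inverse_scale(trues, scaler)

mse = np.mean((preds_raw - trues_raw) ** 2)
mae = np.mean(np.abs(preds_raw - trues_raw))
print(f"original-scale MSE: {mse:.4f}, MAE: {mae:.4f}")
```

As for dropping the validation split: note that the training loop's early stopping and checkpoint selection key off the validation loss (visible in the training log later in this thread), so removing it changes how the best model is chosen.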
Additionally, I observed that when running your scripts, GPU utilization was quite low and the prediction process was very slow. However, when I used the TimeMixer code to train TimeMixer++, it ran much faster. Here are my training logs; could you help me check whether something is wrong with my setup?
start training : long_term_forecast_ECL_96_96_none_TimeMixerPP_custom_sl96_pl96_dm16_nh8_el3_dl1_df64_fc3_ebtimeF_dtTrue_Exp_0>>>>>>>>>>>>>>>>>>>>>>>>>> train 18221 val 2537 test 5165 Epoch: 1 cost time: 756.7272894382477 Epoch: 1, Steps: 569 | Train Loss: 0.2647970 Vali Loss: 0.1805777 Test Loss: 0.2024910 Validation loss decreased (inf --> 0.202491). Saving model ... Updating learning rate to 0.0004361031845833939 iters: 100, epoch: 2 | loss: 0.2019584 speed: 3.3137s/iter; left time: 239130.4876s iters: 200, epoch: 2 | loss: 0.2064125 speed: 0.9925s/iter; left time: 71527.0733s iters: 300, epoch: 2 | loss: 0.1806236 speed: 0.9967s/iter; left time: 71725.1869s iters: 400, epoch: 2 | loss: 0.1718689 speed: 1.0334s/iter; left time: 74266.1321s iters: 500, epoch: 2 | loss: 0.1790082 speed: 1.0203s/iter; left time: 73222.5566s Epoch: 2 cost time: 735.3920855522156 Epoch: 2, Steps: 569 | Train Loss: 0.1913048 Vali Loss: 0.1612632 Test Loss: 0.1810411 Validation loss decreased (0.202491 --> 0.181041). Saving model ... Updating learning rate to 0.0005438696383598014 iters: 100, epoch: 3 | loss: 0.1737544 speed: 3.3320s/iter; left time: 238557.6399s iters: 200, epoch: 3 | loss: 0.1789310 speed: 1.0318s/iter; left time: 73767.7909s iters: 300, epoch: 3 | loss: 0.1665391 speed: 1.0309s/iter; left time: 73603.6197s iters: 400, epoch: 3 | loss: 0.1848269 speed: 1.0256s/iter; left time: 73123.6398s iters: 500, epoch: 3 | loss: 0.1668039 speed: 1.0318s/iter; left time: 73461.2600s Epoch: 3 cost time: 751.6240499019623 Epoch: 3, Steps: 569 | Train Loss: 0.1762433 Vali Loss: 0.1515081 Test Loss: 0.1697921 Validation loss decreased (0.181041 --> 0.169792). Saving model ... Updating learning rate to 0.0007216782312573101 iters: 100, epoch: 4 | loss: 0.1836338 speed: 3.4291s/iter; left time: 243557.9046s iters: 200, epoch: 4 | loss: 0.1758635 speed: 1.0429s/iter; left time: 73970.9727s iters: 300, epoch: 4 | loss: 0.1549026 speed: 1.0467s/iter; left time: 74133.3897s iters: 400, epoch: 4 | loss: 0.1693477 speed: 1.0495s/iter; left time: 74225.1523s iters: 500, epoch: 4 | loss: 0.1675291 speed: 1.0631s/iter; left time: 75079.4604s Epoch: 4 cost time: 776.1883296966553 Epoch: 4, Steps: 569 | Train Loss: 0.1669124 Vali Loss: 0.1438198 Test Loss: 0.1633192 Validation loss decreased (0.169792 --> 0.163319). Saving model ... Updating learning rate to 0.0009668541897551314 iters: 100, epoch: 5 | loss: 0.2054136 speed: 3.5833s/iter; left time: 252470.1488s iters: 200, epoch: 5 | loss: 0.1449289 speed: 1.0766s/iter; left time: 75745.1565s iters: 300, epoch: 5 | loss: 0.1641962 speed: 1.0796s/iter; left time: 75852.3986s iters: 400, epoch: 5 | loss: 0.1423285 speed: 1.0942s/iter; left time: 76763.0595s iters: 500, epoch: 5 | loss: 0.1692078 speed: 1.1129s/iter; left time: 77965.2895s Epoch: 5 cost time: 810.8060607910156 Epoch: 5, Steps: 569 | Train Loss: 0.1600826 Vali Loss: 0.1375290 Test Loss: 0.1579647 Validation loss decreased (0.163319 --> 0.157965). Saving model ... 
Updating learning rate to 0.00127570933348449 iters: 100, epoch: 6 | loss: 0.1811412 speed: 3.7949s/iter; left time: 265215.6239s iters: 200, epoch: 6 | loss: 0.1560985 speed: 1.1357s/iter; left time: 79256.7592s iters: 300, epoch: 6 | loss: 0.1489340 speed: 1.1283s/iter; left time: 78625.7490s iters: 400, epoch: 6 | loss: 0.1682404 speed: 1.1230s/iter; left time: 78150.5873s iters: 500, epoch: 6 | loss: 0.1598627 speed: 1.1313s/iter; left time: 78610.1398s Epoch: 6 cost time: 835.6478009223938 Epoch: 6, Steps: 569 | Train Loss: 0.1534857 Vali Loss: 0.1326768 Test Loss: 0.1532239 Validation loss decreased (0.157965 --> 0.153224). Saving model ... Updating learning rate to 0.0016435975565022264 iters: 100, epoch: 7 | loss: 0.1480751 speed: 3.8564s/iter; left time: 267322.8725s iters: 200, epoch: 7 | loss: 0.1680434 speed: 1.1435s/iter; left time: 79149.8896s iters: 300, epoch: 7 | loss: 0.1503858 speed: 1.1010s/iter; left time: 76102.4247s iters: 400, epoch: 7 | loss: 0.1449037 speed: 1.0821s/iter; left time: 74688.2330s iters: 500, epoch: 7 | loss: 0.1438249 speed: 1.0718s/iter; left time: 73866.1101s Epoch: 7 cost time: 804.6097757816315 Epoch: 7, Steps: 569 | Train Loss: 0.1466978 Vali Loss: 0.1248935 Test Loss: 0.1462551 Validation loss decreased (0.153224 --> 0.146255). Saving model ... Updating learning rate to 0.0020649847186326436 iters: 100, epoch: 8 | loss: 0.1424668 speed: 3.5710s/iter; left time: 245505.2574s iters: 200, epoch: 8 | loss: 0.1597175 speed: 1.0626s/iter; left time: 72950.9013s iters: 300, epoch: 8 | loss: 0.1518050 speed: 1.0612s/iter; left time: 72746.0483s iters: 400, epoch: 8 | loss: 0.1528331 speed: 1.0613s/iter; left time: 72646.9084s iters: 500, epoch: 8 | loss: 0.1401727 speed: 1.0633s/iter; left time: 72675.0088s Epoch: 8 cost time: 782.5107827186584 Epoch: 8, Steps: 569 | Train Loss: 0.1416359 Vali Loss: 0.1227273 Test Loss: 0.1446138 Validation loss decreased (0.146255 --> 0.144614). Saving model ... Updating learning rate to 0.0025335318955026316 iters: 100, epoch: 9 | loss: 0.1640212 speed: 3.5917s/iter; left time: 244883.5439s iters: 200, epoch: 9 | loss: 0.1424782 speed: 1.0803s/iter; left time: 73550.7044s iters: 300, epoch: 9 | loss: 0.1406053 speed: 1.0691s/iter; left time: 72681.7942s iters: 400, epoch: 9 | loss: 0.1344687 speed: 1.0734s/iter; left time: 72863.9119s iters: 500, epoch: 9 | loss: 0.1344212 speed: 1.0729s/iter; left time: 72720.7651s Epoch: 9 cost time: 790.8796648979187 Epoch: 9, Steps: 569 | Train Loss: 0.1383337 Vali Loss: 0.1234406 Test Loss: 0.1445752 Validation loss decreased (0.144614 --> 0.144575). Saving model ... 
Updating learning rate to 0.0030421907349402868 iters: 100, epoch: 10 | loss: 0.1221671 speed: 3.6158s/iter; left time: 244474.7717s iters: 200, epoch: 10 | loss: 0.1388021 speed: 1.0803s/iter; left time: 72931.8338s iters: 300, epoch: 10 | loss: 0.1417881 speed: 1.0787s/iter; left time: 72718.5007s iters: 400, epoch: 10 | loss: 0.1355302 speed: 1.0773s/iter; left time: 72517.9725s iters: 500, epoch: 10 | loss: 0.1564880 speed: 1.0761s/iter; left time: 72323.5529s Epoch: 10 cost time: 788.7694547176361 Epoch: 10, Steps: 569 | Train Loss: 0.1358557 Vali Loss: 0.1225463 Test Loss: 0.1448935 EarlyStopping counter: 1 out of 15 Updating learning rate to 0.0035833094852913607 iters: 100, epoch: 11 | loss: 0.1306558 speed: 3.5695s/iter; left time: 239309.6703s iters: 200, epoch: 11 | loss: 0.1292739 speed: 1.0730s/iter; left time: 71832.1571s iters: 300, epoch: 11 | loss: 0.1189102 speed: 1.0767s/iter; left time: 71971.7447s iters: 400, epoch: 11 | loss: 0.1148966 speed: 1.0708s/iter; left time: 71470.4896s iters: 500, epoch: 11 | loss: 0.1668994 speed: 1.0813s/iter; left time: 72061.8719s Epoch: 11 cost time: 788.3503956794739 Epoch: 11, Steps: 569 | Train Loss: 0.1347841 Vali Loss: 0.1217528 Test Loss: 0.1436831 Validation loss decreased (0.144575 --> 0.143683). Saving model ... Updating learning rate to 0.004148748100670248 iters: 100, epoch: 12 | loss: 0.1256359 speed: 3.5872s/iter; left time: 238456.5907s iters: 200, epoch: 12 | loss: 0.1215385 speed: 1.0943s/iter; left time: 72636.3414s iters: 300, epoch: 12 | loss: 0.1364799 speed: 1.0889s/iter; left time: 72162.5911s iters: 400, epoch: 12 | loss: 0.1285805 speed: 1.0904s/iter; left time: 72153.0903s iters: 500, epoch: 12 | loss: 0.1385314 speed: 1.1060s/iter; left time: 73080.3693s Epoch: 12 cost time: 803.489027261734 Epoch: 12, Steps: 569 | Train Loss: 0.1334861 Vali Loss: 0.1211780 Test Loss: 0.1425641 Validation loss decreased (0.143683 --> 0.142564). Saving model ... Updating learning rate to 0.004730000691617947 iters: 100, epoch: 13 | loss: 0.1310799 speed: 3.6822s/iter; left time: 242675.1602s iters: 200, epoch: 13 | loss: 0.1252051 speed: 1.1022s/iter; left time: 72531.4118s iters: 300, epoch: 13 | loss: 0.1324320 speed: 1.1022s/iter; left time: 72420.4552s iters: 400, epoch: 13 | loss: 0.1329231 speed: 1.1098s/iter; left time: 72811.3946s iters: 500, epoch: 13 | loss: 0.1390626 speed: 1.0981s/iter; left time: 71928.7995s Epoch: 13 cost time: 808.8779151439667 Epoch: 13, Steps: 569 | Train Loss: 0.1323428 Vali Loss: 0.1211742 Test Loss: 0.1422590 Validation loss decreased (0.142564 --> 0.142259). Saving model ... Updating learning rate to 0.005318323479142555 iters: 100, epoch: 14 | loss: 0.1322173 speed: 3.6603s/iter; left time: 239146.1068s iters: 200, epoch: 14 | loss: 0.1258045 speed: 1.1045s/iter; left time: 72056.4090s iters: 300, epoch: 14 | loss: 0.1308311 speed: 1.1126s/iter; left time: 72471.0808s iters: 400, epoch: 14 | loss: 0.1096189 speed: 1.1031s/iter; left time: 71742.5051s iters: 500, epoch: 14 | loss: 0.1165087 speed: 1.1097s/iter; left time: 72057.6367s Epoch: 14 cost time: 813.2106020450592 Epoch: 14, Steps: 569 | Train Loss: 0.1318251 Vali Loss: 0.1196903 Test Loss: 0.1417156 Validation loss decreased (0.142259 --> 0.141716). Saving model ... 
Updating learning rate to 0.005904866327330485 iters: 100, epoch: 15 | loss: 0.1328065 speed: 3.7229s/iter; left time: 241119.7179s iters: 200, epoch: 15 | loss: 0.1321158 speed: 1.1187s/iter; left time: 72343.7692s iters: 300, epoch: 15 | loss: 0.1203708 speed: 1.1039s/iter; left time: 71276.5483s iters: 400, epoch: 15 | loss: 0.1346501 speed: 1.1211s/iter; left time: 72276.8856s iters: 500, epoch: 15 | loss: 0.1562731 speed: 1.1225s/iter; left time: 72250.1356s Epoch: 15 cost time: 814.9609882831573 Epoch: 15, Steps: 569 | Train Loss: 0.1314835 Vali Loss: 0.1189521 Test Loss: 0.1438855 EarlyStopping counter: 1 out of 15 Updating learning rate to 0.006480805875884149 iters: 100, epoch: 16 | loss: 0.1382683 speed: 3.6578s/iter; left time: 234824.5235s iters: 200, epoch: 16 | loss: 0.1208789 speed: 1.0888s/iter; left time: 69787.8019s iters: 300, epoch: 16 | loss: 0.1287526 speed: 1.1095s/iter; left time: 71006.8119s iters: 400, epoch: 16 | loss: 0.1200616 speed: 1.1173s/iter; left time: 71395.5879s iters: 500, epoch: 16 | loss: 0.1363845 speed: 1.1028s/iter; left time: 70358.7361s Epoch: 16 cost time: 806.88671708107 Epoch: 16, Steps: 569 | Train Loss: 0.1308369 Vali Loss: 0.1195413 Test Loss: 0.1423072 EarlyStopping counter: 2 out of 15 Updating learning rate to 0.007037478269874255 iters: 100, epoch: 17 | loss: 0.1160005 speed: 3.6684s/iter; left time: 233418.1647s iters: 200, epoch: 17 | loss: 0.1320223 speed: 1.1077s/iter; left time: 70373.7366s iters: 300, epoch: 17 | loss: 0.1400796 speed: 1.1114s/iter; left time: 70496.7972s iters: 400, epoch: 17 | loss: 0.1349611 speed: 1.1089s/iter; left time: 70224.1652s iters: 500, epoch: 17 | loss: 0.1367357 speed: 1.1122s/iter; left time: 70322.0230s Epoch: 17 cost time: 812.928195476532 Epoch: 17, Steps: 569 | Train Loss: 0.1301353 Vali Loss: 0.1217013 Test Loss: 0.1443913 EarlyStopping counter: 3 out of 15 Updating learning rate to 0.007566509490053841 iters: 100, epoch: 18 | loss: 0.1203674 speed: 3.6941s/iter; left time: 232950.2027s iters: 200, epoch: 18 | loss: 0.1253801 speed: 1.1099s/iter; left time: 69881.0709s iters: 300, epoch: 18 | loss: 0.1170303 speed: 1.1100s/iter; left time: 69775.0307s iters: 400, epoch: 18 | loss: 0.1261112 speed: 1.1110s/iter; left time: 69723.7878s iters: 500, epoch: 18 | loss: 0.1464421 speed: 1.1179s/iter; left time: 70049.4050s Epoch: 18 cost time: 813.0010011196136 Epoch: 18, Steps: 569 | Train Loss: 0.1296439 Vali Loss: 0.1200742 Test Loss: 0.1446345 EarlyStopping counter: 4 out of 15 Updating learning rate to 0.008059941323176025 iters: 100, epoch: 19 | loss: 0.1214533 speed: 3.6898s/iter; left time: 230579.2640s iters: 200, epoch: 19 | loss: 0.1245885 speed: 1.1129s/iter; left time: 69436.6002s iters: 300, epoch: 19 | loss: 0.1245924 speed: 1.1132s/iter; left time: 69341.4095s iters: 400, epoch: 19 | loss: 0.1425354 speed: 1.1173s/iter; left time: 69483.6497s iters: 500, epoch: 19 | loss: 0.1240206 speed: 1.1194s/iter; left time: 69502.0105s Epoch: 19 cost time: 822.0454754829407 Epoch: 19, Steps: 569 | Train Loss: 0.1295379 Vali Loss: 0.1200511 Test Loss: 0.1428440 EarlyStopping counter: 5 out of 15 Updating learning rate to 0.008510351077344748 iters: 100, epoch: 20 | loss: 0.1263539 speed: 3.7566s/iter; left time: 232613.7427s iters: 200, epoch: 20 | loss: 0.1187757 speed: 1.1186s/iter; left time: 69154.3312s iters: 300, epoch: 20 | loss: 0.1373482 speed: 1.1196s/iter; left time: 69105.0846s iters: 400, epoch: 20 | loss: 0.1567149 speed: 1.1282s/iter; left time: 69522.0716s iters: 500, epoch: 20 | 
loss: 0.1515603 speed: 1.1275s/iter; left time: 69364.4968s Epoch: 20 cost time: 823.766086101532 Epoch: 20, Steps: 569 | Train Loss: 0.1281382 Vali Loss: 0.1174305 Test Loss: 0.1411486 Validation loss decreased (0.141716 --> 0.141149). Saving model ... Updating learning rate to 0.008910963241521299 iters: 100, epoch: 21 | loss: 0.1266839 speed: 3.7462s/iter; left time: 229842.8568s iters: 200, epoch: 21 | loss: 0.1243480 speed: 1.1236s/iter; left time: 68825.3901s iters: 300, epoch: 21 | loss: 0.1199915 speed: 1.1093s/iter; left time: 67839.6759s iters: 400, epoch: 21 | loss: 0.1314764 speed: 1.1203s/iter; left time: 68399.7094s iters: 500, epoch: 21 | loss: 0.1308424 speed: 1.1226s/iter; left time: 68422.9836s Epoch: 21 cost time: 826.518247127533 Epoch: 21, Steps: 569 | Train Loss: 0.1287808 Vali Loss: 0.1187790 Test Loss: 0.1415707 EarlyStopping counter: 1 out of 15 Updating learning rate to 0.009255751409493335 iters: 100, epoch: 22 | loss: 0.1549387 speed: 3.8115s/iter; left time: 231678.8401s iters: 200, epoch: 22 | loss: 0.1276367 speed: 1.1414s/iter; left time: 69262.2688s iters: 300, epoch: 22 | loss: 0.1412580 speed: 1.1378s/iter; left time: 68930.3101s iters: 400, epoch: 22 | loss: 0.1388814 speed: 1.1405s/iter; left time: 68979.3570s iters: 500, epoch: 22 | loss: 0.1211233 speed: 1.1367s/iter; left time: 68640.2472s Epoch: 22 cost time: 839.4935293197632 Epoch: 22, Steps: 569 | Train Loss: 0.1275341 Vali Loss: 0.1179090 Test Loss: 0.1409055 Validation loss decreased (0.141149 --> 0.140906). Saving model ... Updating learning rate to 0.00953952893506483 iters: 100, epoch: 23 | loss: 0.1430758 speed: 3.8458s/iter; left time: 231572.4882s iters: 200, epoch: 23 | loss: 0.1514685 speed: 1.1313s/iter; left time: 68006.0719s iters: 300, epoch: 23 | loss: 0.1548619 speed: 1.1460s/iter; left time: 68778.4242s iters: 400, epoch: 23 | loss: 0.1287902 speed: 1.1476s/iter; left time: 68759.8925s iters: 500, epoch: 23 | loss: 0.1221040 speed: 1.1298s/iter; left time: 67581.6119s Epoch: 23 cost time: 836.227929353714 Epoch: 23, Steps: 569 | Train Loss: 0.1379215 Vali Loss: 0.1189515 Test Loss: 0.1409961 EarlyStopping counter: 1 out of 15 Updating learning rate to 0.00975802695474148 iters: 100, epoch: 24 | loss: 0.1394719 speed: 3.8070s/iter; left time: 227072.6245s iters: 200, epoch: 24 | loss: 0.1276447 speed: 1.1391s/iter; left time: 67828.8806s iters: 300, epoch: 24 | loss: 0.1340853 speed: 1.1153s/iter; left time: 66297.3793s iters: 400, epoch: 24 | loss: 0.1531461 speed: 1.1226s/iter; left time: 66619.2120s iters: 500, epoch: 24 | loss: 0.1259017 speed: 1.1287s/iter; left time: 66868.8516s Epoch: 24 cost time: 834.0026700496674 Epoch: 24, Steps: 569 | Train Loss: 0.1323397 Vali Loss: 0.1247473 Test Loss: 0.1474355 EarlyStopping counter: 2 out of 15 Updating learning rate to 0.00990795860421683 iters: 100, epoch: 25 | loss: 0.1494782 speed: 3.8226s/iter; left time: 225829.6084s iters: 200, epoch: 25 | loss: 0.1317555 speed: 1.1274s/iter; left time: 66487.7582s iters: 300, epoch: 25 | loss: 0.1249609 speed: 1.1341s/iter; left time: 66771.2953s iters: 400, epoch: 25 | loss: 0.1275704 speed: 1.1353s/iter; left time: 66728.0288s iters: 500, epoch: 25 | loss: 0.1231593 speed: 1.1186s/iter; left time: 65634.8291s Epoch: 25 cost time: 829.4594376087189 Epoch: 25, Steps: 569 | Train Loss: 0.1303625 Vali Loss: 0.1200667 Test Loss: 0.1412977 EarlyStopping counter: 3 out of 15 Updating learning rate to 0.009987068462650923 iters: 100, epoch: 26 | loss: 0.1184397 speed: 3.7719s/iter; left time: 
220685.1751s iters: 200, epoch: 26 | loss: 0.1477326 speed: 1.1195s/iter; left time: 65390.6223s iters: 300, epoch: 26 | loss: 0.1457703 speed: 1.1236s/iter; left time: 65516.3195s iters: 400, epoch: 26 | loss: 0.1186529 speed: 1.1191s/iter; left time: 65140.9817s iters: 500, epoch: 26 | loss: 0.1353677 speed: 1.1232s/iter; left time: 65269.4359s Epoch: 26 cost time: 823.9760458469391 Epoch: 26, Steps: 569 | Train Loss: 0.1283893 Vali Loss: 0.1177257 Test Loss: 0.1404042 Validation loss decreased (0.140906 --> 0.140404). Saving model ... Updating learning rate to 0.009999620195133964 iters: 100, epoch: 27 | loss: 0.1568359 speed: 3.7479s/iter; left time: 217150.0734s iters: 200, epoch: 27 | loss: 0.1162058 speed: 1.1172s/iter; left time: 64618.7169s iters: 300, epoch: 27 | loss: 0.1212554 speed: 1.1190s/iter; left time: 64611.8196s iters: 400, epoch: 27 | loss: 0.1313380 speed: 1.1318s/iter; left time: 65237.2783s iters: 500, epoch: 27 | loss: 0.1280621 speed: 1.1249s/iter; left time: 64727.4381s Epoch: 27 cost time: 825.2092733383179 Epoch: 27, Steps: 569 | Train Loss: 0.1272240 Vali Loss: 0.1192657 Test Loss: 0.1416844 EarlyStopping counter: 1 out of 15 Updating learning rate to 0.009995377074165727 iters: 100, epoch: 28 | loss: 0.1252169 speed: 3.7786s/iter; left time: 216777.0777s iters: 200, epoch: 28 | loss: 0.1274050 speed: 1.1331s/iter; left time: 64890.1740s iters: 300, epoch: 28 | loss: 0.1205370 speed: 1.1397s/iter; left time: 65155.8838s iters: 400, epoch: 28 | loss: 0.1329421 speed: 1.1363s/iter; left time: 64851.2820s iters: 500, epoch: 28 | loss: 0.1263644 speed: 1.1273s/iter; left time: 64220.7642s Epoch: 28 cost time: 830.3269376754761 Epoch: 28, Steps: 569 | Train Loss: 0.1243257 Vali Loss: 0.1175510 Test Loss: 0.1400369 Validation loss decreased (0.140404 --> 0.140037). Saving model ... Updating learning rate to 0.009986432497967705 iters: 100, epoch: 29 | loss: 0.1320557 speed: 3.7669s/iter; left time: 213965.3757s iters: 200, epoch: 29 | loss: 0.1540249 speed: 1.1384s/iter; left time: 64548.0865s iters: 300, epoch: 29 | loss: 0.1324210 speed: 1.1400s/iter; left time: 64522.4873s iters: 400, epoch: 29 | loss: 0.1272685 speed: 1.1382s/iter; left time: 64307.4729s iters: 500, epoch: 29 | loss: 0.1147796 speed: 1.1305s/iter; left time: 63761.4617s Epoch: 29 cost time: 831.008050441742 Epoch: 29, Steps: 569 | Train Loss: 0.1239177 Vali Loss: 0.1161080 Test Loss: 0.1388716 Validation loss decreased (0.140037 --> 0.138872). Saving model ... 
Updating learning rate to 0.009972794884861934 iters: 100, epoch: 30 | loss: 0.1183965 speed: 3.7607s/iter; left time: 211474.4543s iters: 200, epoch: 30 | loss: 0.1224973 speed: 1.1463s/iter; left time: 64344.3692s iters: 300, epoch: 30 | loss: 0.1187710 speed: 1.1449s/iter; left time: 64148.4668s iters: 400, epoch: 30 | loss: 0.1147176 speed: 1.1366s/iter; left time: 63572.9679s iters: 500, epoch: 30 | loss: 0.1360426 speed: 1.1338s/iter; left time: 63304.7836s Epoch: 30 cost time: 836.4768686294556 Epoch: 30, Steps: 569 | Train Loss: 0.1222085 Vali Loss: 0.1158515 Test Loss: 0.1406534 EarlyStopping counter: 1 out of 15 Updating learning rate to 0.009954477070092525 iters: 100, epoch: 31 | loss: 0.1412870 speed: 3.7973s/iter; left time: 211368.0744s iters: 200, epoch: 31 | loss: 0.1159413 speed: 1.1271s/iter; left time: 62623.4226s iters: 300, epoch: 31 | loss: 0.1257505 speed: 1.1407s/iter; left time: 63266.4546s iters: 400, epoch: 31 | loss: 0.1185998 speed: 1.1349s/iter; left time: 62834.2223s iters: 500, epoch: 31 | loss: 0.1407108 speed: 1.1227s/iter; left time: 62042.6172s Epoch: 31 cost time: 826.9412605762482 Epoch: 31, Steps: 569 | Train Loss: 0.1217030 Vali Loss: 0.1146795 Test Loss: 0.1355571 Validation loss decreased (0.138872 --> 0.138555). Saving model ... Updating learning rate to 0.009931496293745574 iters: 100, epoch: 32 | loss: 0.1284591 speed: 3.7371s/iter; left time: 205891.5067s iters: 200, epoch: 32 | loss: 0.1244491 speed: 1.1200s/iter; left time: 61592.2875s iters: 300, epoch: 32 | loss: 0.1287348 speed: 1.1250s/iter; left time: 61753.3241s iters: 400, epoch: 32 | loss: 0.1156265 speed: 1.1422s/iter; left time: 62587.8117s iters: 500, epoch: 32 | loss: 0.1143681 speed: 1.1391s/iter; left time: 62300.3049s Epoch: 32 cost time: 830.485196352005 Epoch: 32, Steps: 569 | Train Loss: 0.1215707 Vali Loss: 0.1191571 Test Loss: 0.1409026 EarlyStopping counter: 1 out of 15 Updating learning rate to 0.009903874184523402 iters: 100, epoch: 33 | loss: 0.1251111 speed: 3.7921s/iter; left time: 206766.9690s iters: 200, epoch: 33 | loss: 0.1226198 speed: 1.1444s/iter; left time: 62281.4583s iters: 300, epoch: 33 | loss: 0.1282259 speed: 1.1384s/iter; left time: 61843.6806s iters: 400, epoch: 33 | loss: 0.1163408 speed: 1.1391s/iter; left time: 61768.0832s iters: 500, epoch: 33 | loss: 0.1259831 speed: 1.1410s/iter; left time: 61756.3180s Epoch: 33 cost time: 837.5469632148743 Epoch: 33, Steps: 569 | Train Loss: 0.1200218 Vali Loss: 0.1201527 Test Loss: 0.1422589 EarlyStopping counter: 2 out of 15 Updating learning rate to 0.009871636739388375 iters: 100, epoch: 34 | loss: 0.1173344 speed: 3.7980s/iter; left time: 204922.5265s iters: 200, epoch: 34 | loss: 0.1316442 speed: 1.1288s/iter; left time: 60790.9463s iters: 300, epoch: 34 | loss: 0.1053582 speed: 1.1327s/iter; left time: 60889.2737s iters: 400, epoch: 34 | loss: 0.1193904 speed: 1.1202s/iter; left time: 60104.7573s iters: 500, epoch: 34 | loss: 0.1155428 speed: 1.1219s/iter; left time: 60085.5878s Epoch: 34 cost time: 826.464114189148 Epoch: 34, Steps: 569 | Train Loss: 0.1196060 Vali Loss: 0.1162458 Test Loss: 0.1405911 EarlyStopping counter: 3 out of 15 Updating learning rate to 0.009834814299095471 iters: 100, epoch: 35 | loss: 0.1159900 speed: 3.7653s/iter; left time: 201015.5638s iters: 200, epoch: 35 | loss: 0.1259470 speed: 1.1118s/iter; left time: 59242.7752s iters: 300, epoch: 35 | loss: 0.1054239 speed: 1.1265s/iter; left time: 59913.6273s iters: 400, epoch: 35 | loss: 0.1217524 speed: 1.1297s/iter; left time: 
59973.9272s iters: 500, epoch: 35 | loss: 0.1293151 speed: 1.1325s/iter; left time: 60009.1380s Epoch: 35 cost time: 833.0298178195953 Epoch: 35, Steps: 569 | Train Loss: 0.1182280 Vali Loss: 0.1157054 Test Loss: 0.1397015 EarlyStopping counter: 4 out of 15 Updating learning rate to 0.00979344151963663 iters: 100, epoch: 36 | loss: 0.1082402 speed: 3.8603s/iter; left time: 203895.1430s iters: 200, epoch: 36 | loss: 0.1055611 speed: 1.1545s/iter; left time: 60862.1874s iters: 300, epoch: 36 | loss: 0.1189032 speed: 1.1503s/iter; left time: 60525.1116s iters: 400, epoch: 36 | loss: 0.1234522 speed: 1.1417s/iter; left time: 59958.0502s iters: 500, epoch: 36 | loss: 0.1142284 speed: 1.1351s/iter; left time: 59497.2293s Epoch: 36 cost time: 839.5045502185822 Epoch: 36, Steps: 569 | Train Loss: 0.1166528 Vali Loss: 0.1160769 Test Loss: 0.1409409 EarlyStopping counter: 5 out of 15 Updating learning rate to 0.00974755733962374 iters: 100, epoch: 37 | loss: 0.1153315 speed: 3.7988s/iter; left time: 198483.4486s iters: 200, epoch: 37 | loss: 0.1229952 speed: 1.1401s/iter; left time: 59454.6619s iters: 300, epoch: 37 | loss: 0.1108699 speed: 1.1306s/iter; left time: 58845.9782s iters: 400, epoch: 37 | loss: 0.1177943 speed: 1.1351s/iter; left time: 58968.7842s iters: 500, epoch: 37 | loss: 0.1190817 speed: 1.1344s/iter; left time: 58818.3932s Epoch: 37 cost time: 832.9059233665466 Epoch: 37, Steps: 569 | Train Loss: 0.1156981 Vali Loss: 0.1178569 Test Loss: 0.1419994 EarlyStopping counter: 6 out of 15 Updating learning rate to 0.009697204943640982 iters: 100, epoch: 38 | loss: 0.1276972 speed: 3.7777s/iter; left time: 195229.9105s iters: 200, epoch: 38 | loss: 0.1204563 speed: 1.1270s/iter; left time: 58130.4872s iters: 300, epoch: 38 | loss: 0.1175427 speed: 1.1236s/iter; left time: 57841.8129s iters: 400, epoch: 38 | loss: 0.1106953 speed: 1.1199s/iter; left time: 57540.0563s iters: 500, epoch: 38 | loss: 0.1014386 speed: 1.1220s/iter; left time: 57537.1089s Epoch: 38 cost time: 828.7733871936798 Epoch: 38, Steps: 569 | Train Loss: 0.1160038 Vali Loss: 0.1170599 Test Loss: 0.1412026 EarlyStopping counter: 7 out of 15 Updating learning rate to 0.009642431721601015 iters: 100, epoch: 39 | loss: 0.1168443 speed: 3.7986s/iter; left time: 194147.7026s iters: 200, epoch: 39 | loss: 0.1294523 speed: 1.1485s/iter; left time: 58584.5877s iters: 300, epoch: 39 | loss: 0.1090681 speed: 1.1357s/iter; left time: 57819.8470s iters: 400, epoch: 39 | loss: 0.1095421 speed: 1.1332s/iter; left time: 57577.7899s iters: 500, epoch: 39 | loss: 0.1296429 speed: 1.1455s/iter; left time: 58088.1493s Epoch: 39 cost time: 840.1748130321503 Epoch: 39, Steps: 569 | Train Loss: 0.1148623 Vali Loss: 0.1194504 Test Loss: 0.1422009 EarlyStopping counter: 8 out of 15 Updating learning rate to 0.009583289224143234 iters: 100, epoch: 40 | loss: 0.1109730 speed: 3.8569s/iter; left time: 194935.6053s iters: 200, epoch: 40 | loss: 0.1062282 speed: 1.1297s/iter; left time: 56984.4237s iters: 300, epoch: 40 | loss: 0.1315950 speed: 1.1213s/iter; left time: 56447.9984s iters: 400, epoch: 40 | loss: 0.1106779 speed: 1.1160s/iter; left time: 56070.1257s iters: 500, epoch: 40 | loss: 0.1079388 speed: 1.1207s/iter; left time: 56195.2142s Epoch: 40 cost time: 824.3057656288147 Epoch: 40, Steps: 569 | Train Loss: 0.1152440 Vali Loss: 0.1166266 Test Loss: 0.1412615 EarlyStopping counter: 9 out of 15 Updating learning rate to 0.009519833114116133 iters: 100, epoch: 41 | loss: 0.1173238 speed: 3.7279s/iter; left time: 186293.9348s iters: 200, 
epoch: 41 | loss: 0.1118353 speed: 1.1264s/iter; left time: 56176.7757s iters: 300, epoch: 41 | loss: 0.1118452 speed: 1.1205s/iter; left time: 55771.8775s iters: 400, epoch: 41 | loss: 0.1172003 speed: 1.1260s/iter; left time: 55932.6919s iters: 500, epoch: 41 | loss: 0.1048472 speed: 1.1322s/iter; left time: 56128.7458s Epoch: 41 cost time: 831.1303684711456 Epoch: 41, Steps: 569 | Train Loss: 0.1129380 Vali Loss: 0.1166707 Test Loss: 0.1425917 EarlyStopping counter: 10 out of 15 Updating learning rate to 0.009452123114189366 iters: 100, epoch: 42 | loss: 0.1119412 speed: 3.8204s/iter; left time: 188742.6803s iters: 200, epoch: 42 | loss: 0.1091013 speed: 1.1324s/iter; left time: 55831.6080s iters: 300, epoch: 42 | loss: 0.1103965 speed: 1.1441s/iter; left time: 56295.0167s iters: 400, epoch: 42 | loss: 0.1083918 speed: 1.1332s/iter; left time: 55643.2145s iters: 500, epoch: 42 | loss: 0.1068891 speed: 1.1351s/iter; left time: 55624.6720s Epoch: 42 cost time: 832.8055894374847 Epoch: 42, Steps: 569 | Train Loss: 0.1122377 Vali Loss: 0.1158231 Test Loss: 0.1420120 EarlyStopping counter: 11 out of 15 Updating learning rate to 0.009380222950644869 iters: 100, epoch: 43 | loss: 0.1044593 speed: 3.7866s/iter; left time: 184920.2291s iters: 200, epoch: 43 | loss: 0.1141709 speed: 1.1321s/iter; left time: 55173.6204s iters: 300, epoch: 43 | loss: 0.1073963 speed: 1.1445s/iter; left time: 55660.8742s iters: 400, epoch: 43 | loss: 0.1161310 speed: 1.1276s/iter; left time: 54728.2759s iters: 500, epoch: 43 | loss: 0.1060538 speed: 1.1213s/iter; left time: 54310.5074s Epoch: 43 cost time: 829.2817466259003 Epoch: 43, Steps: 569 | Train Loss: 0.1117694 Vali Loss: 0.1188707 Test Loss: 0.1432596 EarlyStopping counter: 12 out of 15 Updating learning rate to 0.009304200293399904 iters: 100, epoch: 44 | loss: 0.1143510 speed: 3.7405s/iter; left time: 180536.6860s iters: 200, epoch: 44 | loss: 0.1145173 speed: 1.1054s/iter; left time: 53244.1001s iters: 300, epoch: 44 | loss: 0.1076961 speed: 1.1026s/iter; left time: 52995.4158s iters: 400, epoch: 44 | loss: 0.1168368 speed: 1.1128s/iter; left time: 53376.7964s iters: 500, epoch: 44 | loss: 0.1042197 speed: 1.1145s/iter; left time: 53349.0329s Epoch: 44 cost time: 816.4163084030151 Epoch: 44, Steps: 569 | Train Loss: 0.1108055 Vali Loss: 0.1175551 Test Loss: 0.1425284 EarlyStopping counter: 13 out of 15 Updating learning rate to 0.009224126692318512 iters: 100, epoch: 45 | loss: 0.1124947 speed: 3.7410s/iter; left time: 178434.2272s iters: 200, epoch: 45 | loss: 0.1003242 speed: 1.1232s/iter; left time: 53461.9835s iters: 300, epoch: 45 | loss: 0.1141060 speed: 1.1231s/iter; left time: 53342.3200s iters: 400, epoch: 45 | loss: 0.1117448 speed: 1.1300s/iter; left time: 53556.5923s iters: 500, epoch: 45 | loss: 0.1143763 speed: 1.1155s/iter; left time: 52758.3714s Epoch: 45 cost time: 824.5720844268799 Epoch: 45, Steps: 569 | Train Loss: 0.1108459 Vali Loss: 0.1149000 Test Loss: 0.1405220 EarlyStopping counter: 14 out of 15 Updating learning rate to 0.009140077509871279 iters: 100, epoch: 46 | loss: 0.1117209 speed: 3.7541s/iter; left time: 176921.6896s iters: 200, epoch: 46 | loss: 0.1118171 speed: 1.1321s/iter; left time: 53239.7828s iters: 300, epoch: 46 | loss: 0.1136606 speed: 1.1223s/iter; left time: 52668.0477s iters: 400, epoch: 46 | loss: 0.1194230 speed: 1.1199s/iter; left time: 52443.1803s iters: 500, epoch: 46 | loss: 0.1014266 speed: 1.1368s/iter; left time: 53118.3237s Epoch: 46 cost time: 831.8894190788269 Epoch: 46, Steps: 569 | Train Loss: 
0.1105391 Vali Loss: 0.1172196 Test Loss: 0.1433370 EarlyStopping counter: 15 out of 15 Early stopping testing : long_term_forecast_ECL_96_96_none_TimeMixerPP_custom_sl96_pl96_dm16_nh8_el3_dl1_df64_fc3_ebtimeF_dtTrue_Exp_0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< test 5165 test shape: (5165, 96, 321) (5165, 96, 321) test shape: (5165, 96, 321) (5165, 96, 321) mse:0.1352925808429718, mae:0.23428012430667877
Thank you very much for your valuable work and contributions. I would appreciate it if you could kindly inform me when the training code will be made publicly available. Your efforts are sincerely appreciated, and I look forward to your response.