[FIX] Typo in the initial config for the TimeSeries TFT
Types of changes
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
Note that a Pull Request should only contain one of refactoring, new features or documentation changes. Please separate these changes and send us individual PRs for each. For more information on how to create a good pull request, please refer to The anatomy of a perfect pull request.
Checklist:
- [x] My code follows the code style of this project.
- [ ] My change requires a change to the documentation.
- [ ] I have updated the documentation accordingly.
- [x] Have you checked to ensure there aren't other open Pull Requests for the same update/change?
- [x] Have you added an explanation of what your changes do and why you'd like us to include them?
- [ ] Have you written new tests for your core changes, as applicable?
- [x] Have you successfully run tests with your changes locally?
Description
Fixed a typo in the initial configs for Time Series forecasting (TemporalFusionTransformer configspace).
Motivation and Context
Ensures the correct namespace is used when the configuration space is created, so the hyperparameter can be looked up under its full namespaced name.
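For context, a minimal sketch of how namespaced hyperparameter names arise in ConfigSpace (the names below are illustrative, not the actual typo that was fixed): a sub-space attached under a prefix exposes its hyperparameters as `<prefix>:<name>`, so any later lookup has to use exactly that name.

```python
# Illustrative only: names are hypothetical, not the actual typo fixed here.
from ConfigSpace import ConfigurationSpace
from ConfigSpace.hyperparameters import CategoricalHyperparameter

sub_cs = ConfigurationSpace()
sub_cs.add_hyperparameter(
    CategoricalHyperparameter("transform_time_features", choices=[True, False])
)

cs = ConfigurationSpace()
# Attaching the sub-space under the "data_loader" prefix produces the
# namespaced name "data_loader:transform_time_features".
cs.add_configuration_space("data_loader", sub_cs, delimiter=":")

print(cs.get_hyperparameter_names())
# ['data_loader:transform_time_features']
```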
How has this been tested?
Something else might still be missing: in `autoPyTorch/pipeline/time_series_forecasting.py`, line 298, the check `if transform_time_features in cs:` crashes with the following error message:
```
Hyperparameter data_loader:transform_time_features not found in space.
Configuration space object:
  Hyperparameters:
    data_loader:backcast, Type: Categorical, Choices: {True, False}, Default: False
    data_loader:backcast_period, Type: UniformInteger, Range: [1, 7], Default: 2
    data_loader:batch_size, Type: UniformInteger, Range: [32, 320], Default: 64
    data_loader:num_batches_per_epoch, Type: UniformInteger, Range: [30, 100], Default: 50
    data_loader:sample_strategy, Type: Categorical, Choices: {LengthUniform, SeqUniform}, Default: SeqUniform
    data_loader:window_size, Type: UniformInteger, Range: [12, 36], Default: 15
    feature_encoding:choice, Type: Categorical, Choices: {NoEncoder}, Default: NoEncoder
    loss:DistributionLoss:aggregation, Type: Categorical, Choices: {mean, median}, Default: mean
    loss:DistributionLoss:dist_cls, Type: Categorical, Choices: {studentT, normal}, Default: studentT
    loss:DistributionLoss:forecast_strategy, Type: Categorical, Choices: {sample, mean}, Default: sample
    loss:DistributionLoss:num_samples, Type: UniformInteger, Range: [50, 200], Default: 100
    loss:QuantileLoss:lower_quantile, Type: UniformFloat, Range: [0.0, 0.4], Default: 0.1
    loss:QuantileLoss:upper_quantile, Type: UniformFloat, Range: [0.6, 1.0], Default: 0.9
    loss:RegressionLoss:loss_name, Type: Categorical, Choices: {l1, mse, mase, mape}, Default: mse
    loss:choice, Type: Categorical, Choices: {DistributionLoss, QuantileLoss, RegressionLoss}, Default: DistributionLoss
    ...
```
If I change it to `if transform_time_features in cs.keys():`, no exception is thrown and fit/predict works.
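A minimal sketch of the guarded membership check (not the autoPyTorch code; hyperparameter names are illustrative, and the exact behaviour of `name in cs` appears to depend on the installed ConfigSpace version):

```python
# Sketch only: with the ConfigSpace version used here, `name in cs` raised for
# an absent hyperparameter, while checking against the declared names in
# cs.keys() returns a plain boolean.
from ConfigSpace import ConfigurationSpace
from ConfigSpace.hyperparameters import CategoricalHyperparameter

cs = ConfigurationSpace()
cs.add_hyperparameter(
    CategoricalHyperparameter("data_loader:backcast", choices=[True, False])
)

transform_time_features = "data_loader:transform_time_features"  # absent from cs

if transform_time_features in cs.keys():
    print("hyperparameter present")
else:
    print("hyperparameter absent")  # reached without an exception
```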