
2024 | Original Paper | Book Chapter

Hyperparameter Tuning MLP’s for Probabilistic Time Series Forecasting

Authors: Kiran Madhusudhanan, Shayan Jawed, Lars Schmidt-Thieme

Published in: Advances in Knowledge Discovery and Data Mining

Publisher: Springer Nature Singapore

Abstract

Time series forecasting attempts to predict future events by analyzing past trends and patterns. Although well researched, certain critical aspects of applying deep learning to time series forecasting remain ambiguous. Our research focuses on the impact of time-series-specific hyperparameters, such as context length and validation strategy, on the performance of a state-of-the-art MLP model for time series forecasting. We conducted a comprehensive series of experiments covering 4,800 configurations per dataset across 20 time series forecasting datasets, and our findings demonstrate the importance of tuning these hyperparameters. Furthermore, we introduce TSBench, the largest metadataset for time series forecasting to date, comprising 97,200 evaluations, a twentyfold increase over previous work in the field. Finally, we demonstrate the utility of the metadataset on multi-fidelity hyperparameter optimization tasks.
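The multi-fidelity use case the abstract mentions can be sketched as follows. A metadataset of precomputed evaluations lets optimizers such as successive halving (the building block of Hyperband) be simulated by table lookup instead of actual training. This is a minimal illustration, not TSBench's actual API: the `curves` dictionary, the configuration names, and the `successive_halving` helper are all hypothetical stand-ins.

```python
# Sketch: simulating multi-fidelity HPO (successive halving) on a
# tabulated metadataset. `curves` is an illustrative stand-in mapping a
# hyperparameter configuration to its recorded validation loss at each
# training epoch (fidelity) -- the kind of lookup a metadataset enables
# without retraining any model.

def successive_halving(curves, eta=2, min_budget=1):
    """Return the surviving configuration after successive halving.

    curves: dict mapping config name -> list of losses, one per fidelity.
    eta: halving rate; 1/eta of the configurations survive each round.
    min_budget: fidelity (1-indexed epoch) used for the first ranking.
    """
    configs = list(curves)
    budget = min_budget
    max_budget = min(len(c) for c in curves.values())
    while len(configs) > 1 and budget <= max_budget:
        # Rank survivors by their tabulated loss at the current fidelity
        # (a cheap dictionary lookup instead of a training run).
        configs.sort(key=lambda name: curves[name][budget - 1])
        configs = configs[: max(1, len(configs) // eta)]
        budget *= eta
    return configs[0]

# Three hypothetical MLP configurations with precomputed loss curves.
curves = {
    "lr=1e-2": [0.90, 0.70, 0.60, 0.55],
    "lr=1e-3": [0.80, 0.50, 0.30, 0.20],
    "lr=1e-4": [0.95, 0.85, 0.80, 0.78],
}
best = successive_halving(curves)
```

Because every evaluation is precomputed, a full halving schedule over thousands of configurations costs only dictionary lookups, which is what makes benchmarking HPO methods on such a metadataset cheap and reproducible.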


Metadata
Title
Hyperparameter Tuning MLP’s for Probabilistic Time Series Forecasting
Authors
Kiran Madhusudhanan
Shayan Jawed
Lars Schmidt-Thieme
Copyright Year
2024
Publisher
Springer Nature Singapore
DOI
https://doi.org/10.1007/978-981-97-2266-2_21
