Search Results (185)

Search Parameters:
Journal = Forecasting

Article
Large Language Models: Their Success and Impact
Forecasting 2023, 5(3), 536-549; https://doi.org/10.3390/forecast5030030 - 25 Aug 2023
Viewed by 240
Abstract
ChatGPT, a state-of-the-art large language model (LLM), is revolutionizing the AI field by exhibiting humanlike skills in a range of tasks that include understanding and answering natural language questions, translating languages, writing code, passing professional exams, and even composing poetry, among its other abilities. ChatGPT has gained immense popularity since its launch, amassing 100 million active monthly users in just two months, thereby establishing itself as the fastest-growing consumer application to date. This paper discusses the reasons for its success as well as the future prospects of similar large language models (LLMs), with an emphasis on their potential impact on forecasting, a specialized and domain-specific field. This is achieved by first comparing the answers of the standard ChatGPT and a custom version trained on published papers from a subfield of forecasting where the answers to the questions asked are known, allowing us to assess the correctness of the two versions' responses. We then compare the responses of the two versions on how judgmental adjustments to the statistical/ML forecasts should be applied by firms to improve their accuracy. The paper concludes by considering the future of LLMs and their impact on all aspects of our life and work, as well as on the field of forecasting specifically. Finally, the conclusion section is generated by ChatGPT, which was provided with a condensed version of this paper and asked to write a four-paragraph conclusion. Full article

Article
Shrinking the Variance in Experts’ “Classical” Weights Used in Expert Judgment Aggregation
Forecasting 2023, 5(3), 522-535; https://doi.org/10.3390/forecast5030029 - 23 Aug 2023
Viewed by 227
Abstract
Mathematical aggregation of probabilistic expert judgments often involves weighted linear combinations of experts’ elicited probability distributions of uncertain quantities. Experts’ weights are commonly derived from calibration experiments based on the experts’ performance scores, where performance is evaluated in terms of the calibration and the informativeness of the elicited distributions. This is referred to as Cooke’s method, or the classical model (CM), for aggregating probabilistic expert judgments. The performance scores are derived from experiments, so they are uncertain and, therefore, can be represented by random variables. As a consequence, the experts’ weights are also random variables. We focus on addressing the underlying uncertainty when calculating experts’ weights to be used in a mathematical aggregation of expert elicited distributions. This paper investigates the potential of applying an empirical Bayes development of the James–Stein shrinkage estimation technique on the CM’s weights to derive shrinkage weights with reduced mean squared errors. We analyze 51 professional CM expert elicitation studies. We investigate the differences between the classical and the (new) shrinkage CM weights and the benefits of using the new weights. In theory, the outcome of a probabilistic model using the shrinkage weights should be better than that obtained when using the classical weights because shrinkage estimation techniques reduce the mean squared errors of estimators in general. In particular, the empirical Bayes shrinkage method used here reduces the assigned weights for those experts with larger variances in the corresponding sampling distributions of weights in the experiment. We measure improvement of the aggregated judgments in a cross-validation setting using two studies that can afford such an approach. Contrary to expectations, the results are inconclusive. However, in practice, we can use the proposed shrinkage weights to increase the reliability of derived weights when only small-sized experiments are available. We demonstrate the latter on 49 post-2006 professional CM expert elicitation studies. Full article
(This article belongs to the Special Issue Feature Papers of Forecasting 2023)
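As an illustration of the shrinkage idea the abstract describes, the following is a minimal numerical sketch (not the paper's code): an empirical-Bayes, James–Stein-style pull of hypothetical classical-model weights toward their common mean, with experts whose weights have larger sampling variance shrunk more strongly. The weight and variance values are invented for the example.

```python
import numpy as np

# Hypothetical classical-model (CM) weights for five experts and the sampling
# variances of those weights, as estimated from a calibration experiment.
w = np.array([0.35, 0.25, 0.20, 0.12, 0.08])
var_w = np.array([0.020, 0.015, 0.030, 0.010, 0.025])

# Empirical-Bayes, James-Stein-style shrinkage toward the common mean:
# experts whose weights have larger sampling variance are pulled more strongly.
grand_mean = w.mean()
between_var = max(w.var(ddof=1) - var_w.mean(), 1e-12)  # crude between-expert variance
shrink_factor = between_var / (between_var + var_w)      # per-expert factor in (0, 1)
w_shrunk = grand_mean + shrink_factor * (w - grand_mean)

# Renormalize so the aggregation weights still sum to one.
w_shrunk /= w_shrunk.sum()
print(np.round(w_shrunk, 3))
```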

Article
A Hybrid Model for Multi-Day-Ahead Electricity Price Forecasting considering Price Spikes
Forecasting 2023, 5(3), 499-521; https://doi.org/10.3390/forecast5030028 - 19 Jul 2023
Viewed by 656
Abstract
This paper proposes a new hybrid model to forecast electricity market prices up to four days ahead. The components of the proposed model are combined in two dimensions. First, on the “vertical” dimension, long short-term memory (LSTM) neural networks and extreme gradient boosting (XGBoost) models are stacked up to produce supplementary price forecasts. The final forecasts are then picked depending on how the predictions compare to a price spike threshold. On the “horizontal” dimension, five models are designed to extend the forecasting horizon to four days. This is an important requirement to make forecasts useful for market participants who trade energy and ancillary services multiple days ahead. The horizontally cascaded models take advantage of the availability of specific public data for each forecasting horizon. To enhance the forecasting capability of the model in dealing with price spikes, we deploy a previously unexplored input in the proposed methodology: recent variations in the output power of thermal units, used as an indicator of unplanned outages or shifts in the supply stack. The proposed method is tested using data from Alberta’s electricity market, which is known for its volatility and price spikes. An economic application of the developed forecasting model is also carried out to demonstrate how several market players in the Alberta electricity market can benefit from the proposed multi-day-ahead price forecasting model. The numerical results demonstrate that the proposed methodology is effective in enhancing forecasting accuracy and price spike detection. Full article
(This article belongs to the Collection Energy Forecasting)
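The abstract does not give the exact selection rule, so the sketch below only illustrates the "pick depending on a price spike threshold" step with placeholder arrays standing in for trained LSTM and XGBoost outputs; the threshold value and the rule for combining the two forecasts are assumptions, not the paper's logic.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 96  # four days ahead at hourly resolution

# Placeholder forecasts standing in for the stacked LSTM and XGBoost outputs.
lstm_forecast = 60 + 20 * rng.standard_normal(hours)
xgb_forecast = 60 + 25 * rng.standard_normal(hours)
xgb_forecast[rng.choice(hours, 5, replace=False)] += 400  # occasional spike signals

SPIKE_THRESHOLD = 200.0  # $/MWh, illustrative value

# Assumed rule: if either model signals a spike, keep the higher (spike-aware)
# forecast; otherwise average the two normal-regime forecasts.
spike_flag = (lstm_forecast > SPIKE_THRESHOLD) | (xgb_forecast > SPIKE_THRESHOLD)
final_forecast = np.where(spike_flag,
                          np.maximum(lstm_forecast, xgb_forecast),
                          0.5 * (lstm_forecast + xgb_forecast))
print(final_forecast[:10].round(1))
```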

Article
On the Disagreement of Forecasting Model Selection Criteria
Forecasting 2023, 5(2), 487-498; https://doi.org/10.3390/forecast5020027 - 20 Jun 2023
Viewed by 618
Abstract
Forecasters have been using various criteria to select the most appropriate model from a pool of candidate models. These include measures of the in-sample accuracy of the models, information criteria, and cross-validation, among others. Although the latter two options are generally preferred due to their ability to tackle overfitting, in univariate time-series forecasting settings, limited work has been conducted to confirm their superiority. In this study, we compared such popular criteria for the case of the exponential smoothing family of models using a large data set of real series. Our results suggest that there is significant disagreement between the suggestions of the examined criteria and that, depending on the approach used, models of different complexity may be favored, with possible negative effects on the forecasting accuracy. Moreover, we find that simple in-sample error measures can effectively select forecasting models, especially when focused on the most recent observations in the series. Full article
(This article belongs to the Special Issue Feature Papers of Forecasting 2023)
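A small self-contained sketch of the kind of disagreement the study examines, assuming recent statsmodels (the `damped_trend` keyword): three exponential smoothing candidates are ranked by AIC and by in-sample MSE on a synthetic series, and the two criteria need not pick the same model.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
y = 100 + 0.3 * np.arange(120) + 5 * rng.standard_normal(120)  # synthetic series

candidates = {
    "SES": dict(trend=None),
    "Holt": dict(trend="add"),
    "Damped Holt": dict(trend="add", damped_trend=True),
}

rows = []
for name, spec in candidates.items():
    fit = ExponentialSmoothing(y, **spec).fit()
    mse = np.mean((y - fit.fittedvalues) ** 2)  # simple in-sample error measure
    rows.append((name, fit.aic, mse))

# The model preferred by AIC need not be the one with the lowest in-sample MSE.
print("best by AIC:", min(rows, key=lambda r: r[1])[0])
print("best by MSE:", min(rows, key=lambda r: r[2])[0])
```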

Article
Comparative Analysis of Machine Learning, Hybrid, and Deep Learning Forecasting Models: Evidence from European Financial Markets and Bitcoins
Forecasting 2023, 5(2), 472-486; https://doi.org/10.3390/forecast5020026 - 20 Jun 2023
Viewed by 800
Abstract
This study analyzes the transmission of market uncertainty to key European financial markets and the cryptocurrency market over an extended period, encompassing the pre-, during, and post-pandemic periods. Daily financial market indices and price observations are used to assess the forecasting models. We compare statistical, machine learning, and deep learning forecasting models, namely the ARIMA, hybrid ETS-ANN, and kNN predictive models, to evaluate the financial markets. The study results indicate that predicting financial market fluctuations is challenging, and the accuracy levels are generally low in several instances. ARIMA and hybrid ETS-ANN models perform better over extended periods compared to the kNN model, with ARIMA being the best-performing model in 2018–2021 and the hybrid ETS-ANN model being the best-performing model in most of the other subperiods. Still, the kNN model outperforms the others in several periods, depending on the observed accuracy measure. Researchers have advocated using parametric and non-parametric modeling combinations to generate better results. In this study, the results suggest that the hybrid ETS-ANN model is the best-performing model despite its moderate level of accuracy. Thus, the hybrid ETS-ANN model is a promising financial time series forecasting approach. The findings offer financial analysts an additional source that can provide valuable insights for investment decisions. Full article
(This article belongs to the Special Issue Forecasting Financial Time Series during Turbulent Times)
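The abstract does not spell out the hybrid ETS-ANN construction; a common pattern, assumed here, is to fit ETS and then model its residuals with a small neural network. The sketch below shows that pattern with statsmodels and scikit-learn on a synthetic series; it is not the authors' implementation.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
y = 100 + np.cumsum(rng.standard_normal(300))  # synthetic index-like series
train, horizon = y[:-10], 10

# Step 1: ETS (Holt's linear trend) captures the smooth/structural part.
ets_fit = ExponentialSmoothing(train, trend="add").fit()
ets_forecast = ets_fit.forecast(horizon)
residuals = train - ets_fit.fittedvalues

# Step 2: a small ANN models what is left in the residuals, using the
# p most recent residuals as features (p = 5 is an assumption).
p = 5
X = np.column_stack([residuals[i:len(residuals) - p + i] for i in range(p)])
target = residuals[p:]
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, target)

# Step 3: iterate the residual forecast and add it to the ETS forecast.
window = list(residuals[-p:])
for _ in range(horizon):
    window.append(ann.predict(np.array(window[-p:]).reshape(1, -1))[0])

hybrid_forecast = ets_forecast + np.array(window[p:])
print(hybrid_forecast.round(2))
```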

Article
Distribution Prediction of Decomposed Relative EVA Measure with Levy-Driven Mean-Reversion Processes: The Case of an Automotive Sector of a Small Open Economy
Forecasting 2023, 5(2), 453-471; https://doi.org/10.3390/forecast5020025 - 29 May 2023
Viewed by 730
Abstract
The paper is focused on predicting the financial performance of a small open economy whose automotive industry has an above-standard share. The paper aims to predict the probability distribution of the decomposed relative economic value-added measure of the automotive production sector NACE 29 in the Czech economy. An advanced Monte Carlo simulation prediction model is applied using the exact pyramid decomposition function. The problem is modelled using advanced stochastic process instruments such as Levy-driven mean-reversion, skew t-regression, normal inverse Gaussian distribution, and t-copula interdependencies. The proposed procedure was found to fit the investigated financial ratios sufficiently, and the estimation was valid. The decomposed approach allows the reflection of the ratios’ complex relationships and improves the prediction results. The decomposed results are compared with the direct prediction. Precision distribution tests confirmed the superiority of the decomposed approach for the particular data. Moreover, the mean value and median of the Czech automotive sector’s financial performance tend to decrease in the future, with negative asymmetry and high volatility hidden in the financial ratios’ decomposition. Scholars can generally use forecasting methods to investigate economic system development, and practitioners can obtain quality and valuable information for decision making. Full article
(This article belongs to the Section Forecasting in Economics and Management)
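A minimal sketch of one ingredient only, with arbitrary parameters: an Euler-discretized mean-reverting path whose shocks are drawn from a normal inverse Gaussian distribution via scipy, standing in for the Levy-driven mean-reversion component; the skew t-regression, t-copula, and decomposition layers are omitted.

```python
import numpy as np
from scipy.stats import norminvgauss

rng = np.random.default_rng(3)
n_steps, dt = 250, 1 / 250           # daily steps over one year (assumption)
kappa, theta, x0 = 3.0, 0.05, 0.02   # reversion speed, long-run level, start

# Heavy-tailed, skewed NIG shocks; the shape values a and b are illustrative.
shocks = norminvgauss.rvs(a=2.0, b=-0.5, loc=0.0, scale=0.02,
                          size=n_steps, random_state=rng)

# Euler discretization of a Levy-driven mean-reversion process:
# dX_t = kappa * (theta - X_t) dt + dL_t
x = np.empty(n_steps + 1)
x[0] = x0
for t in range(n_steps):
    x[t + 1] = x[t] + kappa * (theta - x[t]) * dt + shocks[t] * np.sqrt(dt)

print(x[-5:].round(4))
```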

Article
Solving Linear Integer Models with Variable Bounding
Forecasting 2023, 5(2), 443-452; https://doi.org/10.3390/forecast5020024 - 5 May 2023
Viewed by 1913
Abstract
We present a technique to solve the linear integer model with variable bounding. By using the continuous optimal solution of the linear integer model, the variable bounds for the basic variables are approximated and then used to calculate the optimal integer solution. With the variable bounds of the basic variables known, solving a linear integer model is easier using the branch-and-bound, branch-and-cut, branch-and-price, branch-cut-and-price, or branch-cut-and-free algorithms. Thus, the search over the large numbers of subproblems that are unnecessary and common for NP-complete linear integer models is avoided. Full article
(This article belongs to the Section Forecasting in Computer Science)
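The abstract's idea, sketched below assuming SciPy 1.9 or later is available: solve the continuous relaxation, derive approximate bounds for the variables from the relaxed optimum (the floor/ceil rule with a slack of one used here is illustrative, not the authors' exact procedure), and then solve the integer model restricted to that box.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, linprog, milp

# Small illustrative model: max 5x1 + 4x2  s.t.  6x1 + 4x2 <= 24,  x1 + 2x2 <= 6,  x >= 0.
c = np.array([-5.0, -4.0])            # linprog/milp minimize, so negate the objective
A_ub = np.array([[6.0, 4.0], [1.0, 2.0]])
b_ub = np.array([24.0, 6.0])

# Step 1: continuous (LP-relaxation) optimum.
relax = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")

# Step 2: approximate integer bounds around the relaxed optimum
# (floor/ceil with a slack of 1; an illustrative rule, not the authors' exact one).
lower = np.maximum(np.floor(relax.x) - 1, 0)
upper = np.ceil(relax.x) + 1

# Step 3: solve the integer model restricted to the bounded box.
res = milp(c,
           constraints=LinearConstraint(A_ub, ub=b_ub),
           integrality=np.ones(2),
           bounds=Bounds(lower, upper))
print("integer solution:", res.x, "objective:", -res.fun)
```
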
Article
Automation in Regional Economic Synthetic Index Construction with Uncertainty Measurement
Forecasting 2023, 5(2), 424-442; https://doi.org/10.3390/forecast5020023 - 19 Apr 2023
Viewed by 1013
Abstract
Subnational jurisdictions, compared to the apparatuses of countries and large institutions, have fewer resources and less human capital available to carry out an updated conjunctural follow-up of the economy (nowcasting) and to generate economic predictions (forecasting). This paper presents the results of our research aimed at facilitating the economic decision making of regional public agents. On the one hand, we present an interactive app that, based on dynamic factor analysis, simplifies and automates the construction of economic synthetic indicators and, on the other hand, we evaluate how to measure the uncertainty associated with the synthetic indicator. Theoretical and empirical developments show the suitability of the methodology and the approach for measuring and predicting the underlying aggregate evolution of the economy and, given the complexity associated with the dynamic factor analysis methodology, for using bootstrap techniques to measure the error. We also show that, when we combine different economic series by dynamic factor analysis, approximately 1000 resamples are sufficient to properly calculate the confidence intervals of the synthetic index at the different time instants. Full article
(This article belongs to the Section Forecasting in Economics and Management)
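A simplified stand-in for the bootstrap step, not the app's code: the first principal component of standardized indicators plays the role of the dynamic-factor synthetic index, and its confidence band is obtained from roughly 1000 resamples, the size the paper finds sufficient. The resampling scheme and the synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
T, k = 120, 4  # monthly observations, four regional indicator series (synthetic)
common = np.cumsum(rng.standard_normal(T))
X = common[:, None] * rng.uniform(0.5, 1.5, k) + rng.standard_normal((T, k))
Z = (X - X.mean(0)) / X.std(0)  # standardized indicators

def first_factor(data):
    # Loadings of the first principal component (simplified stand-in for DFA).
    _, _, vt = np.linalg.svd(data - data.mean(0), full_matrices=False)
    v = vt[0]
    return v if v.sum() >= 0 else -v  # fix the sign for comparability

index = Z @ first_factor(Z)

# Bootstrap: resample time points, re-estimate loadings, rebuild the index on
# the original data; roughly 1000 resamples as in the paper.
B = 1000
boot = np.empty((B, T))
for b in range(B):
    boot[b] = Z @ first_factor(Z[rng.integers(0, T, T)])

ci_low, ci_high = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"latest index value: {index[-1]:.2f}  95% CI [{ci_low[-1]:.2f}, {ci_high[-1]:.2f}]")
```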

Article
Projected Future Flooding Pattern of Wabash River in Indiana and Fountain Creek in Colorado: An Assessment Utilizing Bias-Corrected CMIP6 Climate Data
Forecasting 2023, 5(2), 405-423; https://doi.org/10.3390/forecast5020022 - 17 Apr 2023
Viewed by 1110
Abstract
Climate change is considered one of the biggest challenges around the globe as it has been causing alterations in hydrological extremes. Climate change and variability have an impact on future streamflow conditions, water quality, and ecological balance, which are further aggravated by anthropogenic activities such as changes in land use. This study intends to provide insight into potential changes in future streamflow conditions leading to changes in flooding patterns. Flooding is an inevitable, frequently occurring natural event that affects the environment and the socio-economic structure of its surroundings. This study evaluates the flooding pattern and inundation mapping of two different rivers, the Wabash River in Indiana and Fountain Creek in Colorado, using the observed gage data and different climate models. The Coupled Model Intercomparison Project Phase 6 (CMIP6) streamflow data are considered for forecasting future floods. The cumulative distribution function transformation (CDF-t) method is used to correct bias in the CMIP6 streamflow data. The Generalized Extreme Value (L-Moment) method is used for the estimation of the frequency of flooding for 100-year and 500-year return periods. Civil GeoHECRAS is used for each flood event to map flood extent and examine flood patterns. The findings from this study show that there will be a rapid increase in flooding events, even in small creeks, in the upcoming years. This study seeks to assist floodplain managers in strategic planning to adopt state-of-the-art information and provide a sustainable strategy to regions with similar difficulties in floodplain management, to improve socioeconomic life, and to promote environmental sustainability. Full article
(This article belongs to the Section Weather and Forecasting)
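A compact sketch of two steps named in the abstract, on synthetic flows: an empirical quantile-mapping bias correction (a simplified relative of CDF-t) and a GEV fit to annual maxima for the 100-year and 500-year return levels. Note that scipy fits the GEV by maximum likelihood rather than the L-moments used in the paper, and all data below are invented.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)

# Synthetic daily flows: observations, historical model run, and future projection.
obs = rng.gamma(2.0, 150.0, 10_000)
hist = rng.gamma(2.0, 120.0, 10_000)    # biased low relative to obs
future = rng.gamma(2.2, 130.0, 10_000)  # raw projected flows

# Empirical quantile mapping: push each future value through the historical-model
# CDF, then through the inverse observed CDF.
q = np.linspace(0.001, 0.999, 999)
future_corrected = np.interp(
    np.interp(future, np.quantile(hist, q), q),  # approx. F_hist(future)
    q, np.quantile(obs, q))                      # approx. F_obs^{-1}(.)

# GEV fit to annual maxima (365-day blocks) and return-level estimates.
n_years = len(future_corrected) // 365
annual_max = future_corrected[:n_years * 365].reshape(n_years, 365).max(axis=1)
shape, loc, scale = genextreme.fit(annual_max)
for period in (100, 500):
    level = genextreme.ppf(1 - 1 / period, shape, loc=loc, scale=scale)
    print(f"{period}-year flood estimate: {level:,.0f}")
```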

Article
Short-Term Probabilistic Load Forecasting in University Buildings by Means of Artificial Neural Networks
Forecasting 2023, 5(2), 390-404; https://doi.org/10.3390/forecast5020021 - 13 Apr 2023
Cited by 1 | Viewed by 906
Abstract
Understanding how, why and when energy consumption changes provides a tool for decision makers throughout the power networks. Thus, energy forecasting provides a great service. This research proposes a probabilistic approach to capture the five inherent dimensions of a forecast: three dimensions in space, one in time, and one in probability. The forecasts are generated through different models based on artificial neural networks as a post-treatment of point forecasts based on shallow artificial neural networks, creating a dynamic ensemble. The singular value decomposition (SVD) technique is then used herein to generate temperature scenarios and project different futures for the probabilistic forecast. In addition to meteorological conditions, time and recency effects were considered as predictor variables. Buildings that are part of a university campus are used as a case study. Though this methodology was applied to energy demand forecasts in buildings alone, it can easily be extended to energy communities as well. Full article
(This article belongs to the Special Issue Feature Papers of Forecasting 2022)
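The SVD step only, sketched on synthetic hourly temperature profiles: decompose the history of daily profiles and redraw scores for the leading modes to produce alternative temperature scenarios. The way new scores are drawn is an illustrative assumption, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic history: 200 days of hourly temperatures (rows = days, columns = 24 hours).
hours = np.arange(24)
history = (15 + 8 * np.sin((hours - 9) * np.pi / 12)      # daily cycle
           + 2 * rng.standard_normal((200, 24))           # hourly noise
           + rng.normal(0, 3, (200, 1)))                  # day-to-day level shifts

mean_profile = history.mean(axis=0)
U, s, Vt = np.linalg.svd(history - mean_profile, full_matrices=False)

# Scenario generation: keep the r leading modes and redraw their scores from the
# empirical spread of the historical scores (an illustrative assumption).
r, n_scenarios = 3, 50
scores = U[:, :r] * s[:r]                        # historical scores per mode
new_scores = rng.normal(scores.mean(0), scores.std(0), (n_scenarios, r))
scenarios = mean_profile + new_scores @ Vt[:r]   # shape: (n_scenarios, 24)

print(scenarios.shape, scenarios[0].round(1))
```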

Article
Predicting the Oil Price Movement in Commodity Markets in Global Economic Meltdowns
Forecasting 2023, 5(2), 374-389; https://doi.org/10.3390/forecast5020020 - 27 Mar 2023
Viewed by 1521
Abstract
The price of oil is nowadays a hot topic as it affects many areas of the world economy. The price of oil also plays an essential role in how the economic situation is currently developing (such as the COVID-19 pandemic, inflation and others) or the political situation in surrounding countries. The paper aims to predict the oil price movement in stock markets and to what extent the COVID-19 pandemic has affected stock markets. The experiment measures the price of oil from 2000 to 2022. Time-series-smoothing techniques for calculating the results involve multilayer perceptron (MLP) networks and radial basis function (RBF) neural networks. Statistica 13 software (version 13.0) is used to forecast the oil price movement. MLP networks deliver better performance than RBF networks and are applicable in practice. The results showed that the correlation coefficient values of all neural structures and data sets were higher than 0.973 in all cases, indicating only minimal differences between the neural networks. Therefore, we must validate the prediction for the next 20 trading days. After the validation, the neural network with errors closest to zero (10 MLP 1-18-1) came out as the best. This network should be further trained on more data in the future to refine the results. Full article
(This article belongs to the Section Forecasting in Economics and Management)
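The paper builds its networks in Statistica; the sketch below is only a generic scikit-learn analogue of an MLP with an 18-neuron hidden layer trained on lagged prices and checked on a 20-day hold-out, using synthetic data rather than the 2000–2022 oil series. The number of lags is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
price = 60 + np.cumsum(0.8 * rng.standard_normal(1000))  # synthetic oil price path

# Lagged-price features: the previous `lags` prices predict the next one.
lags = 5
X = np.column_stack([price[i:len(price) - lags + i] for i in range(lags)])
y = price[lags:]
split = len(y) - 20  # hold out the last 20 "trading days" for validation

mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(18,), max_iter=3000, random_state=0))
mlp.fit(X[:split], y[:split])

pred = mlp.predict(X[split:])
corr = np.corrcoef(pred, y[split:])[0, 1]
print(f"validation correlation: {corr:.3f}")
```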

Article
Agricultural Commodities in the Context of the Russia-Ukraine War: Evidence from Corn, Wheat, Barley, and Sunflower Oil
Forecasting 2023, 5(1), 351-373; https://doi.org/10.3390/forecast5010019 - 22 Mar 2023
Viewed by 2847
Abstract
The Russian invasion of Ukraine on 24 February 2022 accelerated the rise in agricultural commodity prices and raised food insecurities worldwide. Ukraine and Russia are the leading global suppliers of wheat, corn, barley and sunflower oil. Against this background, we investigated the relationship among these four agricultural commodities and, at the same time, predicted their future performance. The series covers the period from 1 January 1990 to 1 August 2022, based on monthly frequencies. The VAR impulse response function, variance decomposition, Granger Causality Test and vector error correction model were used to analyze relationships between variables. The results indicate that corn prices are an integral part of price changes in wheat, barley and sunflower oil. Wheat prices are also essential but with a weaker influence than that of corn. The additional purpose of this study was to forecast their price changes ten months ahead. The Vector Autoregressive (VAR) and Vector Error Correction Model (VECM) fan charts estimate an average price decline in corn, wheat, barley and sunflower oil in the range of 10%. From a policy perspective, the findings provide reliable signals for countries exposed to food insecurities and inflationary risk. Recognizing the limitations that predictions carry, the results provide modest signals for relevant agencies, international regulatory authorities, retailers and low-income countries. Moreover, stakeholders can become informed about these commodities’ price behavior and the causal relationships they hold with each other. Full article
(This article belongs to the Special Issue Economic Forecasting in Agriculture)
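A minimal statsmodels sketch of the VAR forecasting step on synthetic stand-ins for the four monthly price series (log-differenced for stationarity, a fixed lag order of 2), projecting ten months ahead as the paper does; the data and lag order are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(8)
T = 391  # monthly observations, Jan 1990 - Aug 2022

# Synthetic, strictly positive stand-ins for the four commodity price series.
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(0.03 * rng.standard_normal((T, 4)) + 0.002, axis=0)),
    columns=["corn", "wheat", "barley", "sunflower_oil"],
    index=pd.date_range("1990-01-01", periods=T, freq="MS"))

# Work on log-differences so the series are approximately stationary.
returns = np.log(prices).diff().dropna()

fit = VAR(returns).fit(2)  # lag order fixed at 2 for this sketch
forecast = fit.forecast(returns.values[-fit.k_ar:], steps=10)  # ten months ahead
print(pd.DataFrame(forecast, columns=returns.columns).round(4).head())
```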

Article
Methodology for Optimizing Factors Affecting Road Accidents in Poland
Forecasting 2023, 5(1), 336-350; https://doi.org/10.3390/forecast5010018 - 7 Mar 2023
Cited by 1 | Viewed by 1262
Abstract
With the rapid increase in the number of vehicles on the road, traffic accidents have become a rapidly growing threat, causing the loss of human life and economic assets. The reason for this is the rapid growth of the human population and the development of motorization. The main challenge in predicting and analyzing traffic accident data is the small size of the dataset that can be used for analysis in this regard. While traffic accidents cause, globally, millions of deaths and injuries each year, their density in time and space is low. The purpose of this article is to present a methodology for determining the role of factors influencing road accidents in Poland. For this purpose, multi-criteria optimization methods were used. The results obtained allow us to conclude that the proposed solution can be used to search for the best solution for the selection of factors affecting traffic accidents. Furthermore, based on the study, it can be concluded that the factors primarily influencing traffic accidents are weather conditions (fog, smoke, rainfall, snowfall, hail, or cloud cover), province (Lower Silesian, Lubelskie, Lodzkie, Malopolskie, Mazovian, Opolskie, Podkarpackie, Pomeranian, Silesian, Warmian-Masurian, and Greater Poland), and type of road (with two one-way carriageways; two-way, single carriageway road). Noteworthy is the fact that all days of the week also affect the number of vehicle accidents, although most of them occur on Fridays. Full article

Article
Time Series Dataset Survey for Forecasting with Deep Learning
Forecasting 2023, 5(1), 315-335; https://doi.org/10.3390/forecast5010017 - 3 Mar 2023
Viewed by 2720
Abstract
Deep learning models have revolutionized research fields like computer vision and natural language processing by outperforming traditional models in multiple tasks. However, the field of time series analysis, especially time series forecasting, has not seen a similar revolution, despite forecasting being one of the most prominent tasks of predictive data analytics. One crucial problem for time series forecasting is the lack of large, domain-independent benchmark datasets and a competitive research environment, e.g., annual large-scale challenges, that would spur the development of new models, as was the case for CV and NLP. Furthermore, the focus of time series forecasting research is primarily domain-driven, resulting in many highly individual and domain-specific datasets. Consequently, the progress in the entire field is slowed down due to a lack of comparability across models trained on a single benchmark dataset and on a variety of different forecasting challenges. In this paper, we first explore this problem in more detail and derive the need for a comprehensive, domain-unspecific overview of the state-of-the-art of commonly used datasets for prediction tasks. In doing so, we provide an overview of these datasets and improve comparability in time series forecasting by introducing a method to find similar datasets which can be utilized to test a newly developed model. Ultimately, our survey paves the way towards developing a single widely used and accepted benchmark dataset for time series data, built on the various frequently used datasets surveyed in this paper. Full article
(This article belongs to the Section Forecasting in Computer Science)

Article
Day Ahead Electric Load Forecast: A Comprehensive LSTM-EMD Methodology and Several Diverse Case Studies
Forecasting 2023, 5(1), 297-314; https://doi.org/10.3390/forecast5010016 - 2 Mar 2023
Cited by 2 | Viewed by 1749
Abstract
Optimal behind-the-meter energy management often requires a day-ahead electric load forecast capable of learning non-linear and non-stationary patterns, due to the spatial disaggregation of loads and concept drift associated with time-varying physics and behavior. There are many promising machine learning techniques in the literature, but black box models lack explainability, and therefore confidence in the models’ robustness cannot be achieved without thorough testing on data sets with varying and representative statistical properties. Therefore, this work adopts and builds on some of the highest-performing load forecasting tools in the literature, namely Long Short-Term Memory recurrent networks, Empirical Mode Decomposition for feature engineering, and k-means clustering for outlier detection, and tests the combined methodology on seven different load data sets from six different load sectors. Forecast test set results are benchmarked against a seasonal naive model and SARIMA. The resultant skill scores range from −6.3% to 73%, indicating that the methodology adopted is often, but not exclusively, effective relative to the benchmarks. Full article
(This article belongs to the Special Issue Feature Papers of Forecasting 2023)
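The skill scores quoted are relative to benchmarks; the sketch below shows the kind of calculation involved, scoring a placeholder forecast against a 24-hour seasonal naive with MAE. The definition skill = 1 − MAE_model / MAE_naive is a common convention assumed here, not necessarily the paper's exact metric, and the load data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)
hours = 24 * 14  # two weeks of hourly load, synthetic
t = np.arange(hours)
load = 50 + 15 * np.sin(2 * np.pi * t / 24) + 3 * rng.standard_normal(hours)

# Seasonal naive benchmark: tomorrow's load equals the load 24 hours earlier.
naive = load[:-24]
actual = load[24:]

# Placeholder "model" forecast standing in for the LSTM-EMD output.
model = actual + 2 * rng.standard_normal(len(actual))

mae_model = np.mean(np.abs(model - actual))
mae_naive = np.mean(np.abs(naive - actual))
skill = 1 - mae_model / mae_naive
print(f"skill score vs. seasonal naive: {skill:.1%}")
```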
