Journal Description
Econometrics is an international, peer-reviewed, open access journal on econometric modeling and forecasting, as well as new advances in econometric theory, published quarterly online by MDPI.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), EconLit, EconBiz, RePEc, and other databases.
- Rapid Publication: manuscripts are peer-reviewed, with a first decision provided to authors approximately 65.1 days after submission; acceptance to publication takes 5 days (median values for papers published in this journal in the first half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 1.5 (2022); 5-Year Impact Factor: 1.7 (2022)
Latest Articles
Detecting Pump-and-Dumps with Crypto-Assets: Dealing with Imbalanced Datasets and Insiders’ Anticipated Purchases
Econometrics 2023, 11(3), 22; https://doi.org/10.3390/econometrics11030022 - 30 Aug 2023
Abstract
Detecting pump-and-dump schemes involving cryptoassets with high-frequency data is challenging due to imbalanced datasets and the early occurrence of unusual trading volumes. To address these issues, we propose constructing synthetic balanced datasets using resampling methods and flagging a pump-and-dump from the moment of public announcement up to 60 min beforehand. We validated our proposals using data from Pumpolymp and the CryptoCurrency eXchange Trading Library to identify 351 pump signals relative to the Binance crypto exchange in 2021 and 2022. We found that the most effective approach was using the original imbalanced dataset with pump-and-dumps flagged 60 min in advance, together with a random forest model with data segmented into 30-s chunks and regressors computed with a moving window of 1 h. Our analysis revealed that a better balance between sensitivity and specificity could be achieved by simply selecting an appropriate probability threshold, such as setting the threshold close to the observed prevalence in the original dataset. Resampling methods were useful in some cases, but threshold-independent measures were not affected. Moreover, detecting pump-and-dumps in real-time involves high-dimensional data, and the use of resampling methods to build synthetic datasets can be time-consuming, making them less practical.
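The threshold observation in the abstract is easy to illustrate. The sketch below is a hypothetical toy example (synthetic classifier scores, not the paper's Binance data or its random forest): when the positive class is rare, moving the decision threshold from the default 0.5 down to the observed prevalence trades some specificity for a large gain in sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, highly imbalanced data: ~2% "pump" events (hypothetical numbers,
# chosen only to mimic the imbalance discussed in the abstract).
n = 10_000
y = rng.random(n) < 0.02
# Hypothetical classifier scores: pumps tend to score higher, but scores sit
# low overall because positives are rare.
scores = np.where(y, rng.beta(2, 6, n), rng.beta(1, 20, n))

def sensitivity_specificity(y_true, y_score, threshold):
    pred = y_score >= threshold
    sens = np.sum(pred & y_true) / np.sum(y_true)
    spec = np.sum(~pred & ~y_true) / np.sum(~y_true)
    return sens, spec

# Default 0.5 threshold: almost no positives are flagged.
sens_default, spec_default = sensitivity_specificity(y, scores, 0.5)

# Threshold set near the observed prevalence, as the abstract suggests.
prevalence = y.mean()
sens_prev, spec_prev = sensitivity_specificity(y, scores, prevalence)

print(f"0.5 threshold:        sensitivity={sens_default:.2f}, specificity={spec_default:.2f}")
print(f"prevalence threshold: sensitivity={sens_prev:.2f}, specificity={spec_prev:.2f}")
```

This is threshold tuning only; it leaves threshold-independent measures (e.g., AUC) unchanged, consistent with the abstract's remark that resampling did not affect them.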
Open Access Article
Competition–Innovation Nexus: Product vs. Process, Does It Matter?
Econometrics 2023, 11(3), 21; https://doi.org/10.3390/econometrics11030021 - 25 Aug 2023
Abstract
I study the relationship between competition and innovation, focusing on the distinction between product and process innovations. By considering product innovation, I expand upon earlier research on the competition–innovation relationship, which focused on process innovations. New products allow firms to differentiate themselves from one another. I demonstrate that the competition level that creates the most innovation incentive is higher for process innovation than for product innovation. I also provide empirical evidence that supports these results. Using the community innovation survey, I first show that an inverted U-shape characterizes the relationship between competition and both process and product innovations. The optimal competition level for promoting innovation is higher for process innovation.
Open Access Article
Locationally Varying Production Technology and Productivity: The Case of Norwegian Farming
Econometrics 2023, 11(3), 20; https://doi.org/10.3390/econometrics11030020 - 18 Aug 2023
Abstract
In this study, we leverage geographical coordinates and firm-level panel data to uncover variations in production across different locations. Our approach involves using a semiparametric proxy variable regression estimator, which allows us to define and estimate a customized production function for each firm and its corresponding location. By employing kernel methods, we estimate the nonparametric functions that determine the model’s parameters based on latitude and longitude. Furthermore, our model incorporates productivity components that consider various factors that influence production. Unlike spatially autoregressive-type production functions that assume a uniform technology across all locations, our approach estimates technology and productivity at both the firm and location levels, taking into account their specific characteristics. To handle endogenous regressors, we incorporate a proxy variable identification technique, distinguishing our method from geographically weighted semiparametric regressions. To investigate the heterogeneity in production technology and productivity among Norwegian grain farmers, we apply our model to a sample of farms using panel data spanning from 2001 to 2020. Through this analysis, we provide empirical evidence of regional variations in both technology and productivity among Norwegian grain farmers. Finally, we discuss the suitability of our approach for addressing the heterogeneity in this industry.
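The paper's estimator is a semiparametric proxy-variable regression; as a much simpler stand-in, the sketch below shows the kernel idea it builds on: estimate a location-specific production coefficient by weighting observations with a Gaussian kernel on latitude and longitude. The data, the northward drift in the elasticity, and the bandwidth are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical farm-level data: coordinates, a log input x, log output y,
# with a production elasticity that varies smoothly over space.
n = 400
lat, lon = rng.uniform(58, 64, n), rng.uniform(5, 12, n)
x = rng.normal(size=n)
beta_true = 0.3 + 0.05 * (lat - 58)          # elasticity drifts northward
y = beta_true * x + rng.normal(scale=0.1, size=n)

def local_wls(lat0, lon0, bandwidth=1.0):
    """Kernel-weighted least squares centered at location (lat0, lon0)."""
    d2 = (lat - lat0) ** 2 + (lon - lon0) ** 2
    w = np.exp(-0.5 * d2 / bandwidth**2)      # Gaussian kernel weights
    X = np.column_stack([np.ones(n), x])
    XtW = X.T * w
    return np.linalg.solve(XtW @ X, XtW @ y)  # [intercept, slope]

b_south = local_wls(58.5, 8.0)[1]
b_north = local_wls(63.5, 8.0)[1]
print(f"estimated elasticity south: {b_south:.2f}, north: {b_north:.2f}")
```

The paper's contribution is precisely what this sketch omits: handling endogenous regressors through a proxy-variable identification step rather than plain weighted least squares.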
Open Access Article
Tracking ‘Pure’ Systematic Risk with Realized Betas for Bitcoin and Ethereum
Econometrics 2023, 11(3), 19; https://doi.org/10.3390/econometrics11030019 - 10 Aug 2023
Abstract
Using the capital asset pricing model, this article critically assesses the relative importance of computing ‘realized’ betas from high-frequency returns for Bitcoin and Ethereum—the two major cryptocurrencies—against their classic counterparts computed from 1-day and 5-day returns. The sample includes intraday data from 15 May 2018 until 17 January 2023. Microstructure noise is present up to the 4 min frequency in the BTC and ETH high-frequency data; we therefore opt for a conservative 60 min sampling frequency. Considering 250 trading days as the rolling-window size, we obtain rolling betas < 1 for Bitcoin and Ethereum with respect to the CRIX market index, which could enhance portfolio diversification (at the expense of maximizing returns). We flag the minimal tracking errors at the hourly and daily frequencies. The dispersion of rolling betas is higher for the weekly frequency and is concentrated towards values > 0.8 for BTC (> 0.65 for ETH). The weekly frequency is thus revealed as being less precise for capturing the ‘pure’ systematic risk of Bitcoin and Ethereum. For Ethereum in particular, the availability of high-frequency data tends to produce, on average, a more reliable inference. In the age of financial data feed immediacy, our results strongly suggest that pension fund managers, hedge fund traders, and investment bankers include ‘realized’ versions of CAPM betas in their dashboard of indicators for portfolio risk estimation. Sensitivity analyses cover jump detection in the BTC/ETH high-frequency data (up to 25%). We also consider several jump-robust estimators of realized volatility, among which realized quadpower volatility prevails.
Open Access Article
Estimation of Realized Asymmetric Stochastic Volatility Models Using Kalman Filter
Econometrics 2023, 11(3), 18; https://doi.org/10.3390/econometrics11030018 - 31 Jul 2023
Abstract
Despite the growing interest in realized stochastic volatility models, their estimation techniques, such as simulated maximum likelihood (SML), are computationally intensive. Based on the realized volatility equation, this study demonstrates that, in a finite sample, the quasi-maximum likelihood estimator based on the Kalman filter is competitive with the two-step SML estimator, which is less efficient than the SML estimator. Regarding empirical results for the S&P 500 index, the quasi-likelihood ratio tests favored the two-factor realized asymmetric stochastic volatility model with the standardized t distribution among alternative specifications, and an analysis of out-of-sample forecasts favors the realized stochastic volatility models, rejecting the model without the realized volatility measure. Furthermore, the forecasts of alternative RSV models are statistically equivalent for the data covering the global financial crisis.
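To make the Kalman-filter side of the comparison concrete, here is a minimal filter for a linear Gaussian state-space model with an AR(1) latent log-volatility and a noisy realized-volatility measurement, returning the Gaussian (quasi-)log-likelihood that would be maximized. This is an illustrative simplification, not the paper's full realized asymmetric SV specification, and all parameter values are hypothetical.

```python
import numpy as np

def kalman_loglik(y, mu, phi, q, r):
    """Gaussian log-likelihood of a linear state-space model via the Kalman filter.

    State:       h_t = mu + phi * (h_{t-1} - mu) + eta_t,  eta_t ~ N(0, q)
    Measurement: y_t = h_t + eps_t,                        eps_t ~ N(0, r)
    """
    a, p = mu, q / (1.0 - phi**2)        # stationary initial mean and variance
    ll = 0.0
    for yt in y:
        f = p + r                        # prediction-error variance
        v = yt - a                       # prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                        # Kalman gain
        a_upd = a + k * v                # filtered state mean
        p_upd = p * (1 - k)              # filtered state variance
        a = mu + phi * (a_upd - mu)      # one-step-ahead prediction
        p = phi**2 * p_upd + q
    return ll

# Simulate from the model and check the likelihood at the true parameters.
rng = np.random.default_rng(2)
mu, phi, q, r = -1.0, 0.95, 0.05, 0.1
h, ys = mu, []
for _ in range(2000):
    h = mu + phi * (h - mu) + rng.normal(scale=np.sqrt(q))
    ys.append(h + rng.normal(scale=np.sqrt(r)))
ys = np.array(ys)
print(kalman_loglik(ys, mu, phi, q, r))
```

Maximizing this quasi-likelihood over (mu, phi, q, r) with any numerical optimizer gives the QML estimator whose finite-sample performance the paper studies.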
Open Access Article
Socio-Economic and Demographic Factors Associated with COVID-19 Mortality in European Regions: Spatial Econometric Analysis
Econometrics 2023, 11(2), 17; https://doi.org/10.3390/econometrics11020017 - 20 Jun 2023
Abstract
In some NUTS 2 (Nomenclature of Territorial Units for Statistics) regions of Europe, the COVID-19 pandemic triggered an increase in mortality of several dozen percent, and of only a few percent in others. Based on data on 189 regions from 19 European countries, we identified the factors responsible for these differences, both intra- and internationally. Due to the spatial nature of the virus diffusion, and to account for unobservable country-level and sub-national characteristics, we used spatial econometric tools to estimate two types of models, explaining (i) the number of cases per 10,000 inhabitants and (ii) the percentage increase in the number of deaths compared to the 2016–2019 average in individual regions (mostly NUTS 2) in 2020. We used two weight matrices simultaneously, accounting for both types of spatial autocorrelation: that linked to geographical proximity and that linked to adherence to the same country. For feature selection, we used Bayesian Model Averaging. The number of reported cases is negatively correlated with the share of risk groups in the population (those 60+ years old, older people reporting chronic lower respiratory disease, and those with high blood pressure) and with the level of society’s belief that the positive health effects of restrictions outweighed the economic losses. It is positively correlated with GDP per capita (PPS) and the percentage of people employed in industry. By contrast, mortality (per number of infections) has been limited by high-quality healthcare. Additionally, we noticed that the later the pandemic first hit a region, the lower the death toll there was, even when controlling for the number of infections.
(This article belongs to the Special Issue Health Econometrics)
Open Access Feature Paper Article
Skill Mismatch, Nepotism, Job Satisfaction, and Young Females in the MENA Region
Econometrics 2023, 11(2), 16; https://doi.org/10.3390/econometrics11020016 - 12 Jun 2023
Abstract
Skills utilization is an important factor affecting labor productivity and job satisfaction. This paper examines the effects of skill mismatch, nepotism, and gender discrimination on wages and job satisfaction in MENA workplaces. Gender discrimination implies social costs for firms due to higher turnover rates and lower retention levels. Young females suffer disproportionately more from this than their male counterparts, resulting in a wider gender gap in the labor market at multiple levels. We find that the skill mismatch problem appears to be more significant among specific demographic groups, such as females, immigrants, and ethnic minorities; it is also negatively correlated with job satisfaction and wages. We bridge the literature gap on the main determinants of youth skill mismatch, including nepotism, by presenting evidence from several developing countries. Given the social costs associated with these practices and their impact on the labor market, we compile a list of policy recommendations that governments and relevant stakeholders can adopt to reduce these problems in the workplace, providing a guide to addressing MENA’s skill mismatch and improving overall job satisfaction.
Open Access Article
Parameter Estimation of the Heston Volatility Model with Jumps in the Asset Prices
Econometrics 2023, 11(2), 15; https://doi.org/10.3390/econometrics11020015 - 02 Jun 2023
Abstract
The parametric estimation of stochastic differential equations (SDEs) has been the subject of intense studies already for several decades. The Heston model, for instance, is based on two coupled SDEs and is often used in financial mathematics for the dynamics of asset prices and their volatility. Calibrating it to real data would be very useful in many practical scenarios. It is very challenging, however, since the volatility is not directly observable. In this paper, a complete estimation procedure of the Heston model without and with jumps in the asset prices is presented. Bayesian regression combined with the particle filtering method is used as the estimation framework. Within the framework, we propose a novel approach to handle jumps in order to neutralise their negative impact on the estimates of the key parameters of the model. An improvement in the sampling in the particle filtering method is discussed as well. Our analysis is supported by numerical simulations of the Heston model to investigate the performance of the estimators. In addition, a practical follow-along recipe is given to allow finding adequate estimates from any given data.
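As a concrete reference point for the model being calibrated, the following is a minimal Euler–Maruyama simulation of the Heston dynamics with occasional lognormal jumps in the price, using a full-truncation scheme for the variance. It is a data-generating sketch under hypothetical parameters, not the paper's Bayesian regression or particle-filtering estimation procedure.

```python
import numpy as np

def simulate_heston_jumps(s0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                          xi=0.3, rho=-0.7, lam=0.5, jump_scale=0.05,
                          T=1.0, n=252, seed=3):
    """Euler-Maruyama path of the Heston model with compound-Poisson jumps
    in the log price (illustrative parameter values)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    s, v = np.empty(n + 1), np.empty(n + 1)
    s[0], v[0] = s0, v0
    for t in range(n):
        z1 = rng.normal()
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal()  # correlated shocks
        vp = max(v[t], 0.0)                                 # full truncation
        v[t + 1] = v[t] + kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
        jump = rng.normal(scale=jump_scale) if rng.random() < lam * dt else 0.0
        s[t + 1] = s[t] * np.exp((mu - 0.5 * vp) * dt
                                 + np.sqrt(vp * dt) * z1 + jump)
    return s, v

prices, variances = simulate_heston_jumps()
print(prices[-1], variances.mean())
```

The estimation problem the paper tackles is the inverse of this step: recovering (mu, kappa, theta, xi, rho) and the jump behavior from the price path alone, with the variance path unobserved.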
(This article belongs to the Special Issue Topics in Computational Econometrics and Finance: Theory and Applications)
Open Access Article
Factorization of a Spectral Density with Smooth Eigenvalues of a Multidimensional Stationary Time Series
Econometrics 2023, 11(2), 14; https://doi.org/10.3390/econometrics11020014 - 31 May 2023
Abstract
The aim of this paper is to give a multidimensional version of the classical one-dimensional case of smooth spectral density. A spectral density with smooth eigenvalues and eigenvectors yields an explicit method to factorize the spectral density and compute the Wold representation of a weakly stationary time series. A formula, similar to the Kolmogorov–Szegő formula, is given for the covariance matrix of the innovations. These results are important for obtaining the best linear predictions of the time series. The results are applicable when the rank of the process is smaller than the dimension of the process, which occurs frequently in many current applications, including econometrics.
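For orientation, the classical one-dimensional Kolmogorov–Szegő formula that this result generalizes can be stated as follows, with the spectral density normalized so that the autocovariances are its Fourier coefficients; in the full-rank multivariate case the scalar innovation variance is replaced by the determinant of the innovation covariance matrix (the paper's contribution covers the low-rank case, where this full-rank form must be modified):

```latex
% One-dimensional Kolmogorov--Szeg\H{o} formula: innovation variance of the
% Wold representation of a weakly stationary process with spectral density f.
\sigma^2 \;=\; 2\pi \exp\!\left( \frac{1}{2\pi} \int_{-\pi}^{\pi} \log f(\omega)\, d\omega \right)

% Full-rank d-dimensional analogue, with spectral density matrix f(\omega)
% and innovation covariance matrix \Sigma.
\det \Sigma \;=\; (2\pi)^{d} \exp\!\left( \frac{1}{2\pi} \int_{-\pi}^{\pi} \log \det f(\omega)\, d\omega \right)
```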
(This article belongs to the Special Issue High-Dimensional Time Series in Macroeconomics and Finance)
Open Access Article
Online Hybrid Neural Network for Stock Price Prediction: A Case Study of High-Frequency Stock Trading in the Chinese Market
Econometrics 2023, 11(2), 13; https://doi.org/10.3390/econometrics11020013 - 18 May 2023
Abstract
Time-series data, which exhibit a low signal-to-noise ratio, non-stationarity, and non-linearity, are commonly seen in high-frequency stock trading, where the objective is to increase the likelihood of profit by taking advantage of tiny discrepancies in prices and trading on them quickly and in huge quantities. For this purpose, it is essential to apply a trading method that is capable of fast and accurate prediction from such time-series data. In this paper, we developed an online time series forecasting method for high-frequency trading (HFT) by integrating three neural network deep learning models, i.e., long short-term memory (LSTM), gated recurrent unit (GRU), and transformer; and we abbreviate the new method to online LGT or O-LGT. The key innovation underlying our method is its efficient storage management, which enables super-fast computing. Specifically, when computing the forecast for the immediate future, we only use the output calculated from the previous trading data (rather than the previous trading data themselves) together with the current trading data. Thus, the computing only involves updating the current data into the process. We evaluated the performance of O-LGT by analyzing high-frequency limit order book (LOB) data from the Chinese market. It shows that, in most cases, our model achieves a similar speed with a much higher accuracy than the conventional fast supervised learning models for HFT. However, with a slight sacrifice in accuracy, O-LGT is approximately 12 to 64 times faster than the existing high-accuracy neural network models for LOB data from the Chinese market.
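The storage idea — carry forward only the output computed from past data, never the raw history — can be sketched with a single recurrent cell. The class below is a deliberately simplified, hypothetical stand-in (an Elman-style cell with fixed random weights) for the paper's trained LSTM/GRU/transformer stack; it shows why each new tick costs O(1) work regardless of how long the stream is.

```python
import numpy as np

class OnlineRecurrentForecaster:
    """Keep only the recurrent hidden state as "memory"; past ticks are
    discarded after being folded into it (illustrative sketch)."""

    def __init__(self, n_features, n_hidden=16, seed=4):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(scale=0.3, size=(n_hidden, n_features))
        self.w_rec = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
        self.w_out = rng.normal(scale=0.3, size=n_hidden)
        self.h = np.zeros(n_hidden)          # the only stored state

    def update_and_forecast(self, x):
        # Fold the current tick into the cached state, then forecast from it;
        # cost per tick is independent of the stream length.
        self.h = np.tanh(self.w_in @ x + self.w_rec @ self.h)
        return float(self.w_out @ self.h)

model = OnlineRecurrentForecaster(n_features=3)
rng = np.random.default_rng(5)
for _ in range(100):                         # stream of hypothetical LOB ticks
    forecast = model.update_and_forecast(rng.normal(size=3))
print(forecast)
```

In O-LGT the analogous cached quantities are the LSTM/GRU hidden states and the transformer's stored intermediate outputs, which is what makes the reported 12–64x speedups possible.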
Open Access Article
Local Gaussian Cross-Spectrum Analysis
Econometrics 2023, 11(2), 12; https://doi.org/10.3390/econometrics11020012 - 21 Apr 2023
Abstract
The ordinary spectrum is restricted in its applications, since it is based on the second-order moments (auto- and cross-covariances). Alternative approaches to spectrum analysis have been investigated based on other measures of dependence. One such approach was developed for univariate time series by the authors of this paper using the local Gaussian auto-spectrum based on the local Gaussian auto-correlations. This makes it possible to detect local structures in univariate time series that look similar to white noise when investigated by the ordinary auto-spectrum. In this paper, the local Gaussian approach is extended to a local Gaussian cross-spectrum for multivariate time series. The local Gaussian cross-spectrum has the desirable property that it coincides with the ordinary cross-spectrum for Gaussian time series, which implies that it can be used to detect non-Gaussian traits in the time series under investigation. In particular, if the ordinary spectrum is flat, then peaks and troughs of the local Gaussian spectrum can indicate nonlinear traits, which potentially might reveal local periodic phenomena that are undetected in an ordinary spectral analysis.
(This article belongs to the Special Issue Topics in Computational Econometrics and Finance: Theory and Applications)
Open Access Article
Information-Criterion-Based Lag Length Selection in Vector Autoregressive Approximations for I(2) Processes
Econometrics 2023, 11(2), 11; https://doi.org/10.3390/econometrics11020011 - 20 Apr 2023
Abstract
When using vector autoregressive (VAR) models for approximating time series, a key step is the selection of the lag length. Often this is performed using information criteria, even if a theoretical justification is lacking in some cases. For stationary processes, the asymptotic properties of the corresponding estimators are well documented in great generality in the book by Hannan and Deistler (1988). If the data-generating process is not a finite-order VAR, the selected lag length typically tends to infinity as a function of the sample size. For invertible vector autoregressive moving average (VARMA) processes, this typically happens roughly in proportion to the logarithm of the sample size. The same approach to lag length selection is also followed in practice for more general processes, for example, unit root processes. In the I(1) case, the literature suggests that the behavior is analogous to the stationary case. For I(2) processes, no such results are currently known. This note closes this gap, concluding that information-criteria-based lag length selection for I(2) processes indeed shows properties similar to those in the stationary case.
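A minimal version of the procedure whose I(2) behavior the note studies — information-criterion-based lag length selection for a VAR — can be sketched as follows. The data-generating VAR(2) and all numbers are hypothetical, and AIC stands in for whichever criterion (AIC, BIC, HQ) a practitioner might use.

```python
import numpy as np

def var_lag_by_aic(y, max_lag):
    """Select a VAR lag length by AIC: fit VAR(p) by OLS for each p and
    score log|Sigma_p| + 2 * (#parameters) / T on a common sample."""
    T, k = y.shape
    best_p, best_aic = None, np.inf
    for p in range(1, max_lag + 1):
        # Regressors: intercept plus lags 1..p; start at max_lag so every
        # candidate p is evaluated on the same observations.
        X = np.column_stack([np.ones(T - max_lag)] +
                            [y[max_lag - j:T - j] for j in range(1, p + 1)])
        Y = y[max_lag:]
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ B
        sigma = resid.T @ resid / len(Y)
        aic = np.log(np.linalg.det(sigma)) + 2 * (k * k * p + k) / len(Y)
        if aic < best_aic:
            best_p, best_aic = p, aic
    return best_p

# Simulate a bivariate VAR(2) and see which lag AIC picks.
rng = np.random.default_rng(6)
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.2]])
y = np.zeros((500, 2))
for t in range(2, 500):
    y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + rng.normal(scale=0.5, size=2)
print(var_lag_by_aic(y, max_lag=8))
```

For a finite-order stationary VAR such as this one the selected lag settles near the true order; the regime the note analyzes is the one where no finite order is correct and the selected lag grows with the sample size.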
Open Access Article
Modeling COVID-19 Infection Rates by Regime-Switching Unobserved Components Models
Econometrics 2023, 11(2), 10; https://doi.org/10.3390/econometrics11020010 - 03 Apr 2023
Abstract
The COVID-19 pandemic is characterized by a recurring sequence of peaks and troughs. This article proposes a regime-switching unobserved components (UC) approach to model the trend of COVID-19 infections as a function of this ebb and flow pattern. Estimated regime probabilities indicate the prevalence of either an infection up- or down-turning regime for every day of the observational period. This method provides an intuitive real-time analysis of the state of the pandemic as well as a tool for identifying structural changes ex post. We find that when applied to U.S. data, the model closely tracks regime changes caused by viral mutations, policy interventions, and public behavior.
(This article belongs to the Special Issue High-Dimensional Time Series in Macroeconomics and Finance)
Open Access Article
Detecting Common Bubbles in Multivariate Mixed Causal–Noncausal Models
Econometrics 2023, 11(1), 9; https://doi.org/10.3390/econometrics11010009 - 09 Mar 2023
Abstract
This paper proposes concepts and methods to investigate whether the bubble patterns observed in individual time series are common among them. Having established the conditions under which common bubbles are present within the class of mixed causal–noncausal vector autoregressive models, we suggest statistical tools to detect the common locally explosive dynamics in a Student t-distribution maximum likelihood framework. The performances of both likelihood ratio tests and information criteria were investigated in a Monte Carlo study. Finally, we evaluated the practical value of our approach via an empirical application on three commodity prices.
Open Access Article
Semi-Metric Portfolio Optimization: A New Algorithm Reducing Simultaneous Asset Shocks
Econometrics 2023, 11(1), 8; https://doi.org/10.3390/econometrics11010008 - 07 Mar 2023
Cited by 2
Abstract
This paper proposes a new method for financial portfolio optimization based on reducing simultaneous asset shocks across a collection of assets. This may be understood as an alternative approach to risk reduction in a portfolio based on a new mathematical quantity. First, we apply recently introduced semi-metrics between finite sets to determine the distance between time series’ structural breaks. Then, we build on the classical portfolio optimization theory of Markowitz and use this distance between asset structural breaks for our penalty function, rather than portfolio variance. Our experiments are promising: on synthetic data, we show that our proposed method does indeed diversify among time series with highly similar structural breaks and enjoys advantages over existing metrics between sets. On real data, experiments illustrate that our proposed optimization method performs well relative to nine other commonly used options, producing the second-highest returns, the lowest volatility, and second-lowest drawdown. The main implication for this method in portfolio management is reducing simultaneous asset shocks and potentially sharp associated drawdowns during periods of highly similar structural breaks, such as a market crisis. Our method adds to a considerable literature of portfolio optimization techniques in econometrics and could complement these via portfolio averaging.
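The substitution at the heart of the method — a structural-break-distance penalty in place of the covariance matrix in the Markowitz objective — can be sketched as follows. The exp(-distance) similarity transform, the toy distance matrix, and the simple clipped-gradient "projection" onto the simplex are all hypothetical simplifications, not the paper's semi-metrics or its optimizer.

```python
import numpy as np

def optimize_weights(similarity, mean_ret, gamma=0.5, steps=5000, lr=0.01):
    """Approximate projected gradient descent on the simplex for
    min_w  w' S w - gamma * mu' w,   w >= 0, sum(w) = 1,
    where S penalizes pairs of assets with similar structural-break sets
    (standing in for portfolio variance in the Markowitz objective)."""
    n = len(mean_ret)
    w = np.full(n, 1.0 / n)
    for _ in range(steps):
        grad = 2 * similarity @ w - gamma * mean_ret
        w = np.clip(w - lr * grad, 0.0, None)
        w /= w.sum()                       # crude projection back to the simplex
    return w

# Hypothetical pairwise distances between assets' structural-break sets
# (larger = more dissimilar), converted into a similarity penalty.
break_dist = np.array([[0.0, 0.2, 2.0],
                       [0.2, 0.0, 2.1],
                       [2.0, 2.1, 0.0]])
similarity = np.exp(-break_dist)           # assets 0 and 1 break together
mean_ret = np.array([0.08, 0.08, 0.07])

w = optimize_weights(similarity, mean_ret)
print(np.round(w, 3))
```

Because assets 0 and 1 share nearly identical break sets, the optimizer concentrates weight on asset 2, which is exactly the "avoid simultaneous shocks" diversification the abstract describes.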
Open Access Article
Causal Vector Autoregression Enhanced with Covariance and Order Selection
Econometrics 2023, 11(1), 7; https://doi.org/10.3390/econometrics11010007 - 24 Feb 2023
Abstract
A causal vector autoregressive (CVAR) model is introduced for weakly stationary multivariate processes, combining a recursive directed graphical model for the contemporaneous components and a vector autoregressive model longitudinally. Block Cholesky decomposition with varying block sizes is used to solve the model equations and estimate the path coefficients along a directed acyclic graph (DAG). If the DAG is decomposable, i.e., the zeros form a reducible zero pattern (RZP) in its adjacency matrix, then covariance selection is applied that assigns zeros to the corresponding path coefficients. Real-life applications are also considered, where for the optimal order of the fitted CVAR model, order selection is performed with various information criteria.
(This article belongs to the Special Issue High-Dimensional Time Series in Macroeconomics and Finance)
Open Access Article
Exploring Industry-Distress Effects on Loan Recovery: A Double Machine Learning Approach for Quantiles
Econometrics 2023, 11(1), 6; https://doi.org/10.3390/econometrics11010006 - 14 Feb 2023
Abstract
In this study, we explore the effect of industry distress on recovery rates by using unconditional quantile regression (UQR). The UQR provides better interpretative, and thus policy-relevant, information on the predictive effect of the target variable than the conditional quantile regression. To deal with a broad set of macroeconomic and industry variables, we use lasso-based double selection to estimate the predictive effects of industry distress and select relevant variables. Our sample consists of 5334 debt and loan instruments in Moody’s Default and Recovery Database from 1990 to 2017. The results show that industry distress decreases recovery rates from 15.80% to 2.94% for the 15th to 55th percentile range and slightly increases recovery rates in the lower and upper tails. The UQR provides quantitative measurements of the downturn loss given default that the Basel Capital Accord requires.
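UQR is typically implemented as a "RIF regression": compute the recentered influence function of the unconditional quantile, then regress it on covariates by OLS. The sketch below does this for a hypothetical recovery-rate sample with a distress dummy; the data, the 0.15 shift, and the kernel bandwidth are illustrative, not the Moody's data or the paper's lasso-based double selection.

```python
import numpy as np

def rif_quantile(y, tau, bandwidth=None):
    """Recentered influence function of the tau-th unconditional quantile:
    RIF(y; q_tau) = q_tau + (tau - 1{y <= q_tau}) / f_Y(q_tau),
    with the density at the quantile estimated by a Gaussian kernel."""
    q = np.quantile(y, tau)
    h = bandwidth or 1.06 * y.std() * len(y) ** (-1 / 5)   # Silverman's rule
    f_q = np.mean(np.exp(-0.5 * ((y - q) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
    return q + (tau - (y <= q)) / f_q

# Hypothetical recovery-rate data: distress shifts recoveries down by 0.15.
rng = np.random.default_rng(7)
n = 5000
distress = rng.random(n) < 0.3
recovery = np.clip(0.6 - 0.15 * distress + rng.normal(scale=0.2, size=n), 0, 1)

# RIF regression: OLS of the RIF of the 25th percentile on the distress dummy.
rif = rif_quantile(recovery, tau=0.25)
X = np.column_stack([np.ones(n), distress])
beta, *_ = np.linalg.lstsq(X, rif, rcond=None)
print(f"effect of distress on the 25th percentile: {beta[1]:.3f}")
```

Repeating this over a grid of tau values traces out the quantile-by-quantile effect profile that the abstract reports for the 15th to 55th percentile range.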
Open Access Article
Building Multivariate Time-Varying Smooth Transition Correlation GARCH Models, with an Application to the Four Largest Australian Banks
Econometrics 2023, 11(1), 5; https://doi.org/10.3390/econometrics11010005 - 06 Feb 2023
Cited by 1
Abstract
This paper proposes a methodology for building Multivariate Time-Varying STCC–GARCH models. The novel contributions in this area are the specification tests related to the correlation component, the extension of the general model to allow for additional correlation regimes, and a detailed exposition of the systematic, improved modelling cycle required for such nonlinear models. There is an R-package that includes the steps in the modelling cycle. Simulations demonstrate the robustness of the recommended model building approach. The modelling cycle is illustrated using daily return series for Australia’s four largest banks.
Open Access Feature Paper Article
Comparing the Conditional Logit Estimates and True Parameters under Preference Heterogeneity: A Simulated Discrete Choice Experiment
Econometrics 2023, 11(1), 4; https://doi.org/10.3390/econometrics11010004 - 25 Jan 2023
Cited by 1
Abstract
Health preference research (HPR) is the subfield of health economics dedicated to understanding the value of health and health-related objects using observational or experimental methods. In a discrete choice experiment (DCE), the utility of objects in a choice set (e.g., brand-name medication, generic medication, no medication) may differ systematically between persons due to interpersonal heterogeneity. To allow for interpersonal heterogeneity, choice probabilities may be described using logit functions with fixed individual-specific parameters. However, in practice, a study team may ignore heterogeneity in health preferences and estimate a conditional logit (CL) model. In this simulation study, we examine the effects of omitted variance and correlations (i.e., omitted heterogeneity) in logit parameters on the estimation of the coefficients, willingness to pay (WTP), and choice predictions. The simulated DCE results show that CL estimates may be biased, depending on the structure of the heterogeneity used in the data-generation process. We also found that these biases in the coefficients led to a substantial difference between the true and estimated WTP (i.e., up to 20%). We further found that CL and true choice probabilities were similar to each other (i.e., the difference was less than 0.08) regardless of the underlying structure. The results imply that, under preference heterogeneity, CL estimates may differ from their true means, and these differences can have substantive effects on the WTP estimates. More specifically, CL WTP estimates may be underestimated due to interpersonal heterogeneity, and a failure to recognize this bias in HPR indirectly underestimates the value of treatment, substantially reducing quality of care. These findings have important implications in health economics because CL remains widely used in practice.
(This article belongs to the Special Issue Health Econometrics)
Open Access Editorial
Acknowledgment to the Reviewers of Econometrics in 2022
Econometrics 2023, 11(1), 3; https://doi.org/10.3390/econometrics11010003 - 19 Jan 2023
Abstract
High-quality academic publishing is built on rigorous peer review [...]
Topical Collection in Econometrics
Econometric Analysis of Climate Change
Collection Editors: Claudio Morana, J. Isaac Miller