Listing 1 - 10 of 15
Skewness --- Bridge decks --- Computerized simulation
Yield line method --- Slabs --- Reinforcement structures --- Panels --- Skewness
Regardless of the field, forecasts are widely used, and yet assessments of the embedded uncertainty (the magnitude of the downside and upside risks of the prediction itself) are often missing. Particularly in policy making and investment, accounting for these risks around baseline predictions is of great importance for making better and more informed decisions. This paper introduces a procedure to assess the risks associated with a random phenomenon. The methodology assigns probability distributions to baseline projections of an economic or social random variable (for example, gross domestic product growth, inflation, population growth, or the poverty headcount), combining ex-post and ex-ante market information. The generated asymmetric density forecasts use information derived from surveys on expectations and from the implied statistics of predictive models. The methodology also decomposes the variance and skewness of the predictive distribution, accounting for the shares of selected risk factors. The procedure relies on a Bayesian information-theoretic approach, which allows the inclusion of judgment and forecaster expertise. For reliability and transparency, the paper also evaluates the constructed density forecasts by assigning a score. The continuous ranked probability score is used to assess the prediction accuracy of the elicited density forecasts; this score incentivizes forecasters to report their true and best predictive distribution. An empirical application forecasting world gross domestic product growth is used to test the Bayesian entropy methodology. The predictive variance and skewness of world gross domestic product growth are associated with ex-ante information on four risk factors: term spreads, absolute deviations from headline inflation targets, energy prices, and Standard and Poor's 500 index prices.
The Bayesian entropy technique is benchmarked against naively generated density forecasts that use information from historical forecast errors. The results show that the Bayesian density forecasts outperform these benchmark predictions, illustrating the value added by the introduced methodology.
Bayesian Entropy --- Decomposition --- Economic Growth --- Forecast Uncertainty --- Implied Volatility --- Risk --- Scoring Rules --- Skewness --- Variance
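The continuous ranked probability score mentioned in the abstract has a convenient sample-based form, CRPS(F, y) = E|X − y| − ½·E|X − X′|, with X, X′ drawn independently from the forecast density F. A minimal Python sketch (the function name and the illustrative draws are made up, not taken from the paper):

```python
import numpy as np

def crps_from_samples(samples, observation):
    """Sample-based estimate of the continuous ranked probability score:
    CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|, with X, X' drawn independently
    from the forecast distribution F. Lower scores are better."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - observation))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

# Illustration with made-up numbers: draws from a hypothetical density
# forecast of GDP growth, scored against an observed outcome of 2.1.
rng = np.random.default_rng(0)
forecast_draws = rng.normal(2.5, 0.8, size=2000)
print(crps_from_samples(forecast_draws, 2.1))
```

Because the score is proper, a sharper forecast centered on the realized outcome receives a lower (better) score than a wide, biased one.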
Nonparametric statistics provide a scientific methodology for cases where customary statistics are not applicable. Nonparametric statistics are used when the requirements for parametric analysis fail, such as when data are not normally distributed or the sample size is too small. The method provides an alternative for such cases and is often nearly as powerful as parametric statistics. Another advantage of nonparametric statistics is that they offer analytical methods that are not available otherwise. In the social sciences, it is often not possible to obtain measurements, which renders customary analysis impossible. For example, it is not possible to measure utility, but it is possible to rank preferences, which are based on that unmeasurable utility. Nonparametric methods provide theoretically valid options for analysis, making the use of unscientific methods unnecessary. Nonparametric methods are intuitive and simple to comprehend, which helps researchers in the social sciences understand them without the mathematical rigor demanded by the analytical methods customarily used in science. The only prerequisite for this book is high-school-level elementary algebra. This is a methodology book: it bypasses theoretical proofs while providing comprehensive explanations of the logic behind the methods, together with ample examples, all solved by direct computation as well as with Stata. The book is arranged into two integrated volumes. Although each volume, and for that matter each chapter, can be used separately, it is advisable to read as much of both volumes as possible, because familiarity with what is applicable to different problems will enhance capabilities. It is recommended that everyone read the Introduction and Chapter 1, because determining whether data are random or normally distributed is essential in choosing between parametric and nonparametric methods.
Nonparametric statistics. --- Nonparametric statistics --- median --- order statistics --- rank --- one sample --- two samples --- several samples --- multiple comparison --- normality --- skewness
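As a small illustration of the count- and rank-based methods this book covers, the exact sign test for a hypothesized median needs only elementary algebra. The book itself uses Stata; this Python sketch, with made-up preference scores, is an assumed analogue rather than anything from the text:

```python
from math import comb

def sign_test_pvalue(data, hypothesized_median):
    """Two-sided exact sign test for a hypothesized median.
    Under H0 the number of observations above the median is Binomial(n, 1/2);
    ties with the hypothesized median are dropped, as is conventional."""
    diffs = [x - hypothesized_median for x in data if x != hypothesized_median]
    n = len(diffs)
    k = sum(d > 0 for d in diffs)
    tail = min(k, n - k)
    p = 2 * sum(comb(n, i) for i in range(tail + 1)) * 0.5 ** n
    return min(p, 1.0)

# Hypothetical ranked preference scores, testing a median of 5
scores = [3, 5, 4, 6, 7, 2, 8, 5, 6, 7]
print(sign_test_pvalue(scores, 5))
```

Because it uses only signs, the test is valid for ordinal data such as ranked preferences, where the underlying quantity (utility) cannot be measured.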
In recent years, advances in computer software have substantially increased the number of scientific publications seeking to introduce new probabilistic modelling frameworks, including continuous and discrete approaches and univariate and multivariate models. Many of these theoretical and applied statistical works concern distributions that break the symmetry of the normal distribution and other similar symmetric models, mainly using Azzalini's scheme. This strategy takes a symmetric distribution as the baseline case and adds an extra parameter to the parent model to control the skewness of the new family of probability distributions. The most widespread and popular model is the one based on the normal distribution, which produces the skew-normal distribution. This Special Issue on symmetric and asymmetric distributions presents works related to this topic, as well as theoretical and applied proposals with connections to and implications for it. Immediate applications of this line of work arise in scenarios such as economics, environmental sciences, biometrics, engineering, and health. The Special Issue comprises nine works that follow this methodology, derived through a simple process while retaining the rigor the subject deserves. Readers of this Issue will surely find future lines of work that will enable them to achieve fruitful research results.
Humanities --- Social interaction --- positive and negative skewness --- ordering --- fitting distributions --- Epsilon-skew-Normal --- Epsilon-skew-Cauchy --- bivariate densities --- generalized Cauchy distributions --- asymmetric bimodal distribution --- bimodal --- maximum likelihood --- slashed half-normal distribution --- kurtosis --- likelihood --- EM algorithm --- flexible skew-normal distribution --- skew Birnbaum–Saunders distribution --- bimodality --- maximum likelihood estimation --- Fisher information matrix --- maximum likelihood estimates --- type I and II censoring --- skewness coefficient --- Weibull censored data --- truncation --- half-normal distribution --- probabilistic distribution class --- normal distribution --- identifiability --- moments --- power-normal distribution
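Azzalini's scheme described in the abstract yields, in its best-known case, the skew-normal density f(x; α) = 2φ(x)Φ(αx), where φ and Φ are the standard normal pdf and cdf. A minimal sketch (the function name and evaluation points are illustrative):

```python
import math

def skew_normal_pdf(x, alpha):
    """Azzalini's skew-normal density f(x; alpha) = 2 * phi(x) * Phi(alpha * x),
    where phi and Phi are the standard normal pdf and cdf.
    alpha = 0 recovers the symmetric normal baseline; alpha != 0 tilts it."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    big_phi = 0.5 * (1.0 + math.erf(alpha * x / math.sqrt(2.0)))
    return 2.0 * phi * big_phi

# At alpha = 0 the density reduces to the standard normal
print(skew_normal_pdf(0.0, 0.0))
print(skew_normal_pdf(1.0, 3.0))
```

The extra parameter α controls the skewness of the family while the symmetric parent is recovered as a special case, which is exactly the construction the Special Issue builds on.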
Recently, considerable attention has been paid to the development and application of tools for analyzing the high-dimensional and/or high-frequency datasets that now dominate the landscape. The purpose of this Special Issue is to collect both methodological and empirical papers that develop and utilize state-of-the-art econometric techniques for the analysis of such data.
level, slope, and curvature of the yield curve --- Nelson-Siegel factors --- supervised factor models --- combining forecasts --- principal components --- Minimum variance portfolio --- risk --- shrinkage --- S&P 500 --- high-frequency --- volatility --- forecasting --- realized measures --- bivariate GARCH --- Japanese candlestick --- ordered fuzzy number --- Kosiński’s number --- oriented fuzzy number --- dynamic analysis of securities --- integrated volatility --- high-frequency data --- jumps --- realized skewness --- cross-sectional stock returns --- signed jump variation --- long-range dependence --- log periodogram regression --- smoothed periodogram --- subsampling --- intraday returns --- portfolio selection --- maximum diversification --- regularization
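Several of the keywords above (realized measures, realized skewness, intraday returns) refer to statistics built from high-frequency returns: realized variance is commonly the sum of squared intraday returns, and realized skewness is commonly √N·Σr³/RV^(3/2). A sketch with simulated returns (the numbers are made up, not data from any paper in the Issue):

```python
import numpy as np

def realized_measures(intraday_returns):
    """Realized variance (sum of squared intraday returns) and realized
    skewness, sqrt(N) * sum(r^3) / RV^(3/2), as commonly defined in the
    high-frequency literature."""
    r = np.asarray(intraday_returns, dtype=float)
    n = r.size
    rv = np.sum(r ** 2)
    rskew = np.sqrt(n) * np.sum(r ** 3) / rv ** 1.5
    return rv, rskew

# Hypothetical 1-minute returns for one trading day (390 observations)
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.001, size=390)
rv, rskew = realized_measures(returns)
print(rv, rskew)
```

A large negative intraday move (a downward jump) drives realized skewness negative, which is why signed jump variation and realized skewness are studied together as predictors of cross-sectional stock returns.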