The scope of the contributions to this book is to present new and original research papers based on MPHIE, MHD, and MDPDE, as well as test statistics based on these estimators, from both a theoretical and an applied point of view in different statistical problems, with special emphasis on robustness. Manuscripts giving solutions to different statistical problems, such as model selection criteria based on divergence measures or statistics for high-dimensional data with divergence measures as loss functions, are considered. Reviews emphasizing the most recent state of the art in the solution of statistical problems based on divergence measures are also presented.
Research & information: general --- classification --- Bayes error rate --- Henze–Penrose divergence --- Friedman–Rafsky test statistic --- convergence rates --- bias and variance trade-off --- concentration bounds --- minimal spanning trees --- composite likelihood --- composite minimum density power divergence estimators --- model selection --- minimum pseudodistance estimation --- robustness --- estimation of α --- monitoring --- numerical minimization --- S-estimation --- Tukey’s biweight --- integer-valued time series --- one-parameter exponential family --- minimum density power divergence estimator --- density power divergence --- robust change point test --- Galton-Watson branching processes with immigration --- Hellinger integrals --- power divergences --- Kullback-Leibler information distance/divergence --- relative entropy --- Rényi divergences --- epidemiology --- COVID-19 pandemic --- Bayesian decision making --- INARCH(1) model --- GLM model --- Bhattacharyya coefficient/distance --- time series of counts --- INGARCH model --- SPC --- CUSUM monitoring --- MDPDE --- contingency tables --- disparity --- mixed-scale data --- Pearson residuals --- residual adjustment function --- statistical distances --- Hellinger distance --- large deviations --- divergence measures --- rare event probabilities
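As a concrete illustration of the minimum density power divergence estimator (MDPDE) referred to in the abstract above, the following is a minimal Python sketch, not taken from the book, of MDPDE fitting for a normal model; the tuning constant alpha, the simulated contaminated sample, and all function names are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(params, x, alpha):
    """Empirical density power divergence (DPD) objective for a N(mu, sigma^2) model."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # parameterize on the log scale to keep sigma > 0
    # Closed-form integral of f_theta^(1 + alpha) for the normal density.
    integral = (2.0 * np.pi * sigma**2) ** (-alpha / 2.0) / np.sqrt(1.0 + alpha)
    fx = norm.pdf(x, loc=mu, scale=sigma)
    return integral - (1.0 + 1.0 / alpha) * np.mean(fx**alpha)

def mdpde_normal(x, alpha=0.5):
    """MDPDE of (mu, sigma); a small alpha behaves much like maximum likelihood."""
    start = np.array([np.median(x), np.log(np.std(x))])
    res = minimize(dpd_objective, start, args=(x, alpha), method="Nelder-Mead")
    return res.x[0], float(np.exp(res.x[1]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(0.0, 1.0, size=180)
    outliers = rng.normal(8.0, 1.0, size=20)  # roughly 10% contamination
    x = np.concatenate([clean, outliers])
    print("near-MLE fit (alpha=0.01):", mdpde_normal(x, alpha=0.01))
    print("robust fit   (alpha=0.50):", mdpde_normal(x, alpha=0.5))

With roughly 10% of the observations shifted far from the bulk, the fit obtained with the larger alpha is expected to stay close to the parameters of the clean component, whereas the near-MLE fit is pulled toward the outliers; this trade-off between efficiency and robustness in the choice of alpha is a recurring theme of the contributions.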
The analysis and modeling of time series is of the utmost importance in various fields of application. This Special Issue is a collection of articles on a wide range of topics, covering stochastic models for time series as well as methods for their analysis, univariate and multivariate time series, real-valued and discrete-valued time series, applications of time series methods to forecasting and statistical process control, and software implementations of methods and models for time series. The proposed approaches and concepts are thoroughly discussed and illustrated with several real-world data examples.
Humanities --- time series --- anomaly detection --- unsupervised learning --- kernel density estimation --- missing data --- multivariate time series --- nonstationary --- spectral matrix --- local field potential --- electric power --- forecasting accuracy --- machine learning --- extended binomial distribution --- INAR --- thinning operator --- time series of counts --- unemployment rate --- SARIMA --- SETAR --- Holt–Winters --- ETS --- neural network autoregression --- Romania --- integer-valued time series --- bivariate Poisson INGARCH model --- outliers --- robust estimation --- minimum density power divergence estimator --- CUSUM control chart --- INAR-type time series --- statistical process monitoring --- random survival rate --- zero-inflation --- cointegration --- subspace algorithms --- VARMA models --- seasonality --- finance --- volatility fluctuation --- Student’s t-process --- entropy-based particle filter --- relative entropy --- count data --- time series analysis --- Julia programming language --- ordinal patterns --- long-range dependence --- multivariate data analysis --- limit theorems --- integer-valued moving average model --- counting series --- dispersion test --- Bell distribution --- count time series --- estimation --- overdispersion --- multivariate count data --- INGARCH --- state-space model --- bank failures --- transactions --- periodic autoregression --- integer-valued threshold models --- parameter estimation --- models
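As a small illustration of the statistical process monitoring and count time series themes listed above, here is a minimal Python sketch, not taken from any article in the issue, of a one-sided CUSUM chart for detecting an upward mean shift in a Poisson count series; the reference value k, the control limit h, and the simulated data are illustrative assumptions.

import numpy as np

def poisson_cusum(counts, k, h):
    """One-sided CUSUM for an upward shift; returns the path and the first alarm index."""
    c = 0.0
    path, alarm = [], None
    for t, x in enumerate(counts):
        c = max(0.0, c + x - k)  # accumulate evidence of counts exceeding the reference value k
        path.append(c)
        if alarm is None and c > h:
            alarm = t
    return np.array(path), alarm

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    in_control = rng.poisson(lam=4.0, size=100)  # in-control mean 4
    shifted = rng.poisson(lam=6.0, size=50)      # upward shift after t = 100
    counts = np.concatenate([in_control, shifted])
    path, alarm = poisson_cusum(counts, k=5.0, h=10.0)
    print("first alarm at t =", alarm)

A common heuristic is to place k between the in-control and anticipated out-of-control means and to calibrate h to a target in-control average run length; the articles in the issue develop more refined monitoring schemes for dependent counts, for example INGARCH-based charts with robustly estimated parameters.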
This book presents new and original research in Statistical Information Theory, based on minimum divergence estimators and test statistics, from both a theoretical and an applied point of view, for different statistical problems with special emphasis on efficiency and robustness. Divergence statistics based on maximum likelihood estimators, as well as Wald's statistics, likelihood ratio statistics, and Rao's score statistics, share several optimal asymptotic properties but are highly non-robust under model misspecification or in the presence of outlying observations. It is well known that a small deviation from the underlying model assumptions can have a drastic effect on the performance of these classical tests. Specifically, this book presents a robust version of the classical Wald test, for testing simple and composite null hypotheses in general parametric models, based on minimum divergence estimators.
mixture index of fit --- Kullback-Leibler distance --- relative error estimation --- minimum divergence inference --- Neyman-Pearson test --- influence function --- consistency --- thematic quality assessment --- asymptotic normality --- Hellinger distance --- nonparametric test --- Bernstein-von Mises theorem --- maximum composite likelihood estimator --- 2-alternating capacities --- efficiency --- corrupted data --- statistical distance --- robustness --- log-linear models --- representation formula --- goodness-of-fit --- general linear model --- Wald-type test statistics --- Hölder divergence --- divergence --- logarithmic super divergence --- information geometry --- sparse --- robust estimation --- relative entropy --- minimum disparity methods --- MM algorithm --- local-polynomial regression --- association models --- total variation --- Bayesian nonparametric --- ordinal classification variables --- Wald test statistic --- Wald-type test --- composite hypotheses --- compressed data --- hypothesis testing --- Bayesian semi-parametric --- single index model --- indoor localization --- composite minimum density power divergence estimator --- quasi-likelihood --- Chernoff-Stein lemma --- composite likelihood --- asymptotic property --- Bregman divergence --- robust testing --- misspecified hypothesis and alternative --- least-favorable hypotheses --- location-scale family --- correlation models --- minimum penalized φ-divergence estimator --- non-quadratic distance --- robust --- semiparametric model --- divergence-based testing --- measurement errors --- bootstrap distribution estimator --- generalized Rényi entropy --- minimum divergence methods --- generalized linear model --- φ-divergence --- Bregman information --- iterated limits --- centroid --- model assessment --- divergence measure --- model check --- two-sample test --- Wald statistic
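To make the robust Wald-type construction described in the abstract concrete, the following minimal Python sketch, which is not the book's own code, tests H0: mu = mu0 in a normal model with known sigma by plugging the MDPDE of mu into a Wald-type statistic and comparing it with a chi-squared(1) limit; the choice alpha = 0.5, the closed-form asymptotic variance used here, and the simulated contaminated sample are assumptions of this illustration.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def mdpde_mean(x, sigma, alpha):
    """MDPDE of mu when sigma is known: the integral term of the DPD objective
    does not depend on mu, so only the data-dependent term needs to be optimized."""
    def obj(mu):
        return -np.mean(np.exp(-alpha * (x - mu) ** 2 / (2.0 * sigma**2)))
    return minimize_scalar(obj, bounds=(x.min(), x.max()), method="bounded").x

def wald_type_test(x, mu0, sigma, alpha=0.5):
    """Wald-type statistic with the MDPDE plugged in; chi-squared(1) limit under H0.
    Uses the closed-form asymptotic variance sigma^2 (1+alpha)^3 / (1+2*alpha)^(3/2)."""
    n = len(x)
    mu_hat = mdpde_mean(x, sigma, alpha)
    avar = sigma**2 * (1.0 + alpha) ** 3 / (1.0 + 2.0 * alpha) ** 1.5
    w = n * (mu_hat - mu0) ** 2 / avar
    return w, chi2.sf(w, df=1)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # 5% of the sample is contaminated, but H0: mu = 0 holds for the clean component.
    x = np.concatenate([rng.normal(0.0, 1.0, 190), rng.normal(10.0, 1.0, 10)])
    w, p = wald_type_test(x, mu0=0.0, sigma=1.0, alpha=0.5)
    print(f"Wald-type statistic = {w:.2f}, p-value = {p:.3f}")

Because the exponential weight downweights observations far from the bulk, the few large outliers in this example barely move the estimate, so the test should not spuriously reject the true null; this stability under contamination, in contrast to the classical Wald test, is the kind of behavior the book's robust Wald-type tests aim for.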