This Special Issue "Differential Geometrical Theory of Statistics" collates selected invited and contributed talks presented during the conference GSI'15 on "Geometric Science of Information" which was held at the Ecole Polytechnique, Paris-Saclay Campus, France, in October 2015 (Conference web site: http://www.see.asso.fr/gsi2015).
Hessian geometry --- shape space --- computational information geometry --- statistical physics --- entropy --- cohomology --- information geometry --- thermodynamics --- coding theory --- information topology --- maximum entropy --- divergence geometry
This Special Issue of the journal Entropy, titled “Information Geometry I”, contains a collection of 17 papers concerning the foundations and applications of information geometry. Based on a geometrical interpretation of probability, information geometry has become a rich mathematical field employing the methods of differential geometry. It has numerous applications to data science, physics, and neuroscience. Presenting original research, yet written in an accessible, tutorial style, this collection of papers will be useful for scientists who are new to the field, while providing an excellent reference for the more experienced researcher. Several papers are written by authorities in the field, and topics cover the foundations of information geometry, as well as applications to statistics, Bayesian inference, machine learning, complex systems, physics, and neuroscience.
decomposable divergence --- tensor Sylvester matrix --- maximum pseudo-likelihood estimation --- matrix resultant --- Markov random fields --- Fisher information --- Fisher information matrix --- Stein equation --- entropy --- Sylvester matrix --- information geometry --- stationary process --- dually flat structure --- information theory --- Bezout matrix --- Vandermonde matrix
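To make the "geometrical interpretation of probability" described above concrete, here is a minimal sketch (my own illustration, not drawn from any paper in the issue) that estimates the Fisher information matrix of the univariate Gaussian family by Monte Carlo and compares it with the known closed form diag(1/σ², 2/σ²); this matrix is the Riemannian metric that information geometry places on the model. The function name and sample size are arbitrary.

```python
# Minimal sketch: the Fisher information matrix of N(mu, sigma^2),
# estimated as the second moment of the score function.
import numpy as np

def fisher_information_gaussian(mu, sigma, n_samples=200_000, seed=0):
    """Monte Carlo estimate of E[score score^T] for N(mu, sigma^2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, n_samples)
    # Score function: gradient of log N(x; mu, sigma) with respect to (mu, sigma).
    d_mu = (x - mu) / sigma**2
    d_sigma = (x - mu) ** 2 / sigma**3 - 1.0 / sigma
    scores = np.stack([d_mu, d_sigma], axis=1)
    return scores.T @ scores / n_samples

mu, sigma = 0.0, 2.0
print(fisher_information_gaussian(mu, sigma))   # approximately [[1/sigma^2, 0], [0, 2/sigma^2]]
print(np.diag([1 / sigma**2, 2 / sigma**2]))    # closed form, for comparison
```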
Data science, information theory, probability theory, statistical learning, and other related disciplines greatly benefit from non-negative measures of dissimilarity between pairs of probability measures, known as divergence measures, and exploring their mathematical foundations and diverse applications is of significant interest. The present Special Issue, entitled “Divergence Measures: Mathematical Foundations and Applications in Information-Theoretic and Statistical Problems”, includes eight original contributions and focuses on the mathematical properties and applications of classical and generalized divergence measures from an information-theoretic perspective. It mainly deals with two key generalizations of the relative entropy: the Rényi divergence and the important class of f-divergences. It is our hope that readers will find this Special Issue of interest and that it will stimulate further research into the mathematical foundations and applications of divergence measures.
Research & information: general --- Mathematics & science --- Bregman divergence --- f-divergence --- Jensen-Bregman divergence --- Jensen diversity --- Jensen-Shannon divergence --- capacitory discrimination --- Jensen-Shannon centroid --- mixture family --- information geometry --- difference of convex (DC) programming --- conditional Rényi divergence --- horse betting --- Kelly gambling --- Rényi divergence --- Rényi mutual information --- relative entropy --- chi-squared divergence --- f-divergences --- method of types --- large deviations --- strong data-processing inequalities --- information contraction --- maximal correlation --- Markov chains --- information inequalities --- mutual information --- Rényi entropy --- Carlson-Levin inequality --- information measures --- hypothesis testing --- total variation --- skew-divergence --- convexity --- Pinsker's inequality --- Bayes risk --- statistical divergences --- minimum divergence estimator --- maximum likelihood --- bootstrap --- conditional limit theorem --- Bahadur efficiency --- α-mutual information --- Augustin-Csiszár mutual information --- data transmission --- error exponents --- dimensionality reduction --- discriminant analysis --- statistical inference --- Bregman divergence --- f-divergence --- Jensen-Bregman divergence --- Jensen diversity --- Jensen-Shannon divergence --- capacitory discrimination --- Jensen-Shannon centroid --- mixture family --- information geometry --- difference of convex (DC) programming --- conditional Rényi divergence --- horse betting --- Kelly gambling --- Rényi divergence --- Rényi mutual information --- relative entropy --- chi-squared divergence --- f-divergences --- method of types --- large deviations --- strong data-processing inequalities --- information contraction --- maximal correlation --- Markov chains --- information inequalities --- mutual information --- Rényi entropy --- Carlson-Levin inequality --- information measures --- hypothesis testing --- total variation --- skew-divergence --- convexity --- Pinsker's inequality --- Bayes risk --- statistical divergences --- minimum divergence estimator --- maximum likelihood --- bootstrap --- conditional limit theorem --- Bahadur efficiency --- α-mutual information --- Augustin-Csiszár mutual information --- data transmission --- error exponents --- dimensionality reduction --- discriminant analysis --- statistical inference
The recent successes of machine learning, especially regarding systems based on deep neural networks, have encouraged further research activities and raised a new set of challenges in understanding and designing complex machine learning algorithms. New applications require learning algorithms to be distributed, produce transferable results, use computational resources efficiently, converge quickly in online settings, have performance guarantees, satisfy fairness or privacy constraints, incorporate domain knowledge on model structures, and so on. A new wave of developments in statistical learning theory and information theory has set out to address these challenges. This Special Issue, "Machine Learning and Information Theory", aims to collect recent results in this direction, reflecting a diverse spectrum of visions and efforts to extend conventional theories and develop analysis tools for these complex machine learning systems.
Technology: general issues --- History of engineering & technology --- supervised classification --- independent and non-identically distributed features --- analytical error probability --- empirical risk --- generalization error --- K-means clustering --- model compression --- population risk --- rate distortion theory --- vector quantization --- overfitting --- information criteria --- entropy --- model-based clustering --- merging mixture components --- component overlap --- interpretability --- time series prediction --- finite state machines --- hidden Markov models --- recurrent neural networks --- reservoir computers --- long short-term memory --- deep neural network --- information theory --- local information geometry --- feature extraction --- spiking neural network --- meta-learning --- information theoretic learning --- minimum error entropy --- artificial general intelligence --- closed-loop transcription --- linear discriminative representation --- rate reduction --- minimax game --- fairness --- HGR maximal correlation --- independence criterion --- separation criterion --- pattern dictionary --- atypicality --- Lempel–Ziv algorithm --- lossless compression --- anomaly detection --- information-theoretic bounds --- distributed and federated learning
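The keyword list above mentions overfitting, information criteria, and model-based clustering; the sketch below (my own illustration, with an arbitrary synthetic data set and the Gaussian-noise BIC formula k·ln n − 2·ln L) shows how an information criterion penalizes model complexity when choosing the degree of a polynomial fit.

```python
# Minimal sketch: guarding against overfitting with the Bayesian information
# criterion when selecting the degree of a least-squares polynomial fit.
import numpy as np

rng = np.random.default_rng(1)
n = 60
x = np.linspace(-1, 1, n)
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.2, n)   # true model: degree 2

def bic_poly(x, y, degree):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid**2)                   # ML estimate of the noise variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = degree + 2                               # polynomial coefficients + noise variance
    return k * np.log(n) - 2 * loglik

for d in range(1, 7):
    print(d, round(bic_poly(x, y, d), 2))        # the minimum is expected near degree 2
```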
This book presents new and original research in statistical information theory, based on minimum divergence estimators and test statistics, from both theoretical and applied points of view, for a range of statistical problems with special emphasis on efficiency and robustness. Divergence statistics based on maximum likelihood estimators, as well as Wald's statistics, likelihood ratio statistics, and Rao's score statistics, share several optimal asymptotic properties but are highly non-robust under model misspecification in the presence of outlying observations. It is well known that even a small deviation from the underlying model assumptions can have a drastic effect on the performance of these classical tests. Specifically, this book presents a robust version of the classical Wald test, for testing simple and composite null hypotheses for general parametric models, based on minimum divergence estimators.
mixture index of fit --- Kullback-Leibler distance --- relative error estimation --- minimum divergence inference --- Neyman-Pearson test --- influence function --- consistency --- thematic quality assessment --- asymptotic normality --- Hellinger distance --- nonparametric test --- Bernstein-von Mises theorem --- maximum composite likelihood estimator --- 2-alternating capacities --- efficiency --- corrupted data --- statistical distance --- robustness --- log-linear models --- representation formula --- goodness-of-fit --- general linear model --- Wald-type test statistics --- Hölder divergence --- divergence --- logarithmic super divergence --- information geometry --- sparse --- robust estimation --- relative entropy --- minimum disparity methods --- MM algorithm --- local-polynomial regression --- association models --- total variation --- Bayesian nonparametric --- ordinal classification variables --- Wald test statistic --- Wald-type test --- composite hypotheses --- compressed data --- hypothesis testing --- Bayesian semi-parametric --- single index model --- indoor localization --- composite minimum density power divergence estimator --- quasi-likelihood --- Chernoff-Stein lemma --- composite likelihood --- asymptotic property --- Bregman divergence --- robust testing --- misspecified hypothesis and alternative --- least-favorable hypotheses --- location-scale family --- correlation models --- minimum penalized φ-divergence estimator --- non-quadratic distance --- robust --- semiparametric model --- divergence-based testing --- measurement errors --- bootstrap distribution estimator --- generalized Rényi entropy --- minimum divergence methods --- generalized linear model --- φ-divergence --- Bregman information --- iterated limits --- centroid --- model assessment --- divergence measure --- model check --- two-sample test --- Wald statistic
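For readers unfamiliar with minimum divergence estimation, here is a minimal sketch (my own illustration, not the book's procedure; the contamination level, tuning constant α, and search bounds are arbitrary) of a minimum density power divergence estimator in the sense of Basu et al. for the location of a N(μ, 1) model, contrasted with the maximum likelihood estimator on contaminated data.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(10.0, 1.0, 5)])  # 5% outliers

def dpd_objective(mu, x, alpha=0.5, sigma=1.0):
    """Empirical density power divergence criterion (data-dependent terms only)."""
    integral = (2 * np.pi * sigma**2) ** (-alpha / 2) / np.sqrt(1 + alpha)  # int f^(1+alpha) dx
    sample_term = np.mean(norm.pdf(x, mu, sigma) ** alpha)
    return integral - (1 + alpha) / alpha * sample_term

mle = x.mean()                                   # maximum likelihood estimate of mu
mdpde = minimize_scalar(dpd_objective, args=(x,), bounds=(-5, 5), method="bounded").x
print(round(mle, 3), round(mdpde, 3))            # the MLE is dragged toward the outliers; the MDPDE stays near 0
```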
Finite-time thermodynamics describes how processes of any nature can be optimized when their rate must be non-negligible, i.e., when they must come to completion in a finite time. What the theory makes explicit is “the cost of haste”. Intuitively, it is quite obvious that you drive your car differently when you want to reach your destination as quickly as possible than when you are running out of gas. Finite-time thermodynamics quantifies such opposing requirements and may provide the optimal control that achieves the best compromise. The theory was initially developed for heat engines (steam, Otto, Stirling, among others) and for refrigerators, but it has by now spread into essentially all areas of dynamical systems, from the most abstract to the most practical. The present collection shows some fascinating current examples.
Economics, finance, business & management --- macroentropy --- microentropy --- endoreversible engine --- reversible computing --- Landauer's principle --- piston motion optimization --- endoreversible thermodynamics --- Stirling engine --- irreversibility --- power --- efficiency --- optimization --- generalized radiative heat transfer law --- optimal motion path --- maximum work output --- elimination method --- finite time thermodynamics --- thermodynamics --- economics --- optimal processes --- averaged --- heat transfer --- cyclic mode --- simulation --- modeling --- reconstruction --- nonequilibrium thermodynamics --- entropy production --- contact temperature --- quantum thermodynamics --- maximum power --- shortcut to adiabaticity --- quantum friction --- Otto cycle --- quantum engine --- quantum refrigerator --- finite-time thermodynamics --- sulfuric acid decomposition --- tubular plug-flow reactor --- entropy generation rate --- SO2 yield --- multi-objective optimization --- optimal control --- thermodynamic cycles --- thermodynamic length --- hydrogen atom --- nano-size engines --- a-thermal cycle --- heat engines --- cooling --- very long timescales --- slow time --- ideal gas law --- new and modified variables --- Silicon–Germanium alloys --- minimum of thermal conductivity --- efficiency of thermoelectric systems --- minimal energy dissipation --- radiative energy transfer --- radiative entropy transfer --- two-stream grey atmosphere --- energy flux density --- entropy flux density --- generalized winds --- conservatively perturbed equilibrium --- extreme value --- momentary equilibrium --- information geometry of thermodynamics --- thermodynamic curvature --- critical phenomena --- binary fluids --- van der Waals equation --- quantum heat engine --- Carnot cycle --- multiobjective optimization --- Pareto front --- stability --- maximum power regime --- entropy behavior --- biophysics --- biochemistry --- dynamical systems --- diversity --- complexity --- path information --- calorimetry --- entropy flow --- biological communities --- reacting systems
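A classical, self-contained example of "the cost of haste" mentioned above (quoted here for illustration; it is a textbook result rather than a summary of any chapter) is the Curzon-Ahlborn efficiency of an endoreversible engine operated at maximum power, 1 − √(Tc/Th), compared with the reversible Carnot limit 1 − Tc/Th; the temperatures below are an arbitrary example.

```python
# Efficiency at maximum power (Curzon-Ahlborn) versus the reversible limit (Carnot).
import math

def carnot_efficiency(t_hot, t_cold):
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_hot, t_cold):
    return 1.0 - math.sqrt(t_cold / t_hot)

t_hot, t_cold = 565.0 + 273.15, 25.0 + 273.15   # example reservoir temperatures, in kelvin
print(f"Carnot:         {carnot_efficiency(t_hot, t_cold):.3f}")
print(f"Curzon-Ahlborn: {curzon_ahlborn_efficiency(t_hot, t_cold):.3f}")
```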