TY - BOOK
ID - 134127759
TI - Approximate Bayesian Inference
PY - 2022
PB - MDPI - Multidisciplinary Digital Publishing Institute
CY - Basel
DB - UniCat
KW - Research & information: general
KW - Mathematics & science
KW - bifurcation
KW - dynamical systems
KW - Edwards–Sokal coupling
KW - mean-field
KW - Kullback–Leibler divergence
KW - variational inference
KW - Bayesian statistics
KW - machine learning
KW - variational approximations
KW - PAC-Bayes
KW - expectation-propagation
KW - Markov chain Monte Carlo
KW - Langevin Monte Carlo
KW - sequential Monte Carlo
KW - Laplace approximations
KW - approximate Bayesian computation (ABC)
KW - Gibbs posterior
KW - MCMC
KW - stochastic gradients
KW - neural networks
KW - differential evolution
KW - Markov kernels
KW - discrete state space
KW - ergodicity
KW - Markov chain
KW - probably approximately correct
KW - variational Bayes
KW - Bayesian inference
KW - Riemann manifold Hamiltonian Monte Carlo
KW - integrated nested Laplace approximation
KW - fixed-form variational Bayes
KW - stochastic volatility
KW - network modeling
KW - network variability
KW - Stiefel manifold
KW - MCMC-SAEM
KW - data imputation
KW - Bethe free energy
KW - factor graphs
KW - message passing
KW - variational free energy
KW - variational message passing
KW - differential privacy (DP)
KW - sparse vector technique (SVT)
KW - Gaussian
KW - particle flow
KW - variable flow
KW - Langevin dynamics
KW - Hamiltonian Monte Carlo
KW - non-reversible dynamics
KW - control variates
KW - thinning
KW - meta-learning
KW - hyperparameters
KW - priors
KW - online learning
KW - online optimization
KW - gradient descent
KW - statistical learning theory
KW - PAC-Bayes theory
KW - deep learning
KW - generalisation bounds
KW - Bayesian sampling
KW - Monte Carlo integration
KW - no free lunch theorems
KW - sequential learning
KW - principal curves
KW - data streams
KW - regret bounds
KW - greedy algorithm
KW - sleeping experts
KW - entropy
KW - robustness
KW - statistical mechanics
KW - complex systems
UR - https://www.unicat.be/uniCat?func=search&query=sysid:134127759
AB - Extremely popular for statistical inference, Bayesian methods are also gaining ground in machine learning and artificial intelligence problems. Bayesian estimators are often implemented by Monte Carlo methods, such as the Metropolis–Hastings algorithm or the Gibbs sampler. These algorithms target the exact posterior distribution. However, many modern models in statistics are simply too complex for such methodologies, and in machine learning the volume of data used in practice makes Monte Carlo methods too slow to be useful. On the other hand, these applications often do not require exact knowledge of the posterior. This has motivated the development of a new generation of algorithms that are fast enough to handle huge datasets but that often target only an approximation of the posterior. This book gathers 18 research papers written by Approximate Bayesian Inference specialists and provides an overview of recent advances in these algorithms, including optimization-based methods (such as variational approximations) and simulation-based methods (such as ABC or Monte Carlo algorithms). The theoretical aspects of Approximate Bayesian Inference are covered, specifically PAC-Bayes bounds and regret analysis. Applications to challenging computational problems in astrophysics, finance, medical data analysis, and computer vision are also presented.
ER -