Narrow your search

Library

FARO (2)

KU Leuven (2)

LUCA School of Arts (2)

Odisee (2)

Thomas More Kempen (2)

Thomas More Mechelen (2)

UCLL (2)

ULiège (2)

VIVES (2)

Vlaams Parlement (2)


Resource type

book (5)


Language

English (5)


Year

2022 (5)

Listing 1 - 5 of 5

Book
Computational Optimizations for Machine Learning
Author:
Year: 2022 Publisher: Basel MDPI - Multidisciplinary Digital Publishing Institute


Abstract

This book contains the ten articles accepted for publication in the Special Issue “Computational Optimizations for Machine Learning” of the MDPI journal Mathematics, which cover a wide range of topics in the theory and applications of machine learning, neural networks, and artificial intelligence. These topics include, among others, the main classes of machine learning, such as supervised, unsupervised, and reinforcement learning, as well as deep neural networks, convolutional neural networks, GANs, decision trees, linear regression, SVMs, k-means clustering, Q-learning, temporal-difference learning, and deep adversarial networks. The book is intended for those developing mathematical algorithms and applications in artificial intelligence and machine learning, and for readers with the appropriate mathematical background who wish to become familiar with recent advances in the computational optimization mathematics of machine learning, a field that now permeates almost every sector of human life and activity.



Keywords

Research & information: general --- Mathematics & science --- ARIMA model --- time series analysis --- online optimization --- online model selection --- precipitation nowcasting --- deep learning --- autoencoders --- radar data --- generalization error --- recurrent neural networks --- machine learning --- model predictive control --- nonlinear systems --- neural networks --- low power --- quantization --- CNN architecture --- multi-objective optimization --- genetic algorithms --- evolutionary computation --- swarm intelligence --- Heating, Ventilation and Air Conditioning (HVAC) --- metaheuristics search --- bio-inspired algorithms --- smart building --- soft computing --- training --- evolution of weights --- artificial intelligence --- deep neural networks --- convolutional neural network --- deep compression --- DNN --- ReLU --- floating-point numbers --- hardware acceleration --- energy dissipation --- FLOW-3D --- hydraulic jumps --- bed roughness --- sensitivity analysis --- feature selection --- evolutionary algorithms --- nature inspired algorithms --- meta-heuristic optimization --- computational intelligence


Book
Approximate Bayesian Inference
Author:
Year: 2022 Publisher: Basel MDPI - Multidisciplinary Digital Publishing Institute


Abstract

Extremely popular for statistical inference, Bayesian methods are also becoming popular in machine learning and artificial intelligence problems. Bayesian estimators are often implemented by Monte Carlo methods, such as the Metropolis–Hastings algorithm or the Gibbs sampler. These algorithms target the exact posterior distribution. However, many modern statistical models are simply too complex to use such methodologies. In machine learning, the volume of data used in practice makes Monte Carlo methods too slow to be useful. On the other hand, these applications often do not require exact knowledge of the posterior. This has motivated the development of a new generation of algorithms that are fast enough to handle huge datasets but that often target only an approximation of the posterior. This book gathers 18 research papers written by specialists in Approximate Bayesian Inference and provides an overview of recent advances in these algorithms, including optimization-based methods (such as variational approximations) and simulation-based methods (such as ABC or Monte Carlo algorithms). The theoretical aspects of Approximate Bayesian Inference are covered, specifically PAC–Bayes bounds and regret analysis. Applications to challenging computational problems in astrophysics, finance, medical data analysis, and computer vision are also presented.

Keywords

Research & information: general --- Mathematics & science --- bifurcation --- dynamical systems --- Edward–Sokal coupling --- mean-field --- Kullback–Leibler divergence --- variational inference --- Bayesian statistics --- machine learning --- variational approximations --- PAC-Bayes --- expectation-propagation --- Markov chain Monte Carlo --- Langevin Monte Carlo --- sequential Monte Carlo --- Laplace approximations --- approximate Bayesian computation --- Gibbs posterior --- MCMC --- stochastic gradients --- neural networks --- Approximate Bayesian Computation --- differential evolution --- Markov kernels --- discrete state space --- ergodicity --- Markov chain --- probably approximately correct --- variational Bayes --- Bayesian inference --- Markov Chain Monte Carlo --- Sequential Monte Carlo --- Riemann Manifold Hamiltonian Monte Carlo --- integrated nested Laplace approximation --- fixed-form variational Bayes --- stochastic volatility --- network modeling --- network variability --- Stiefel manifold --- MCMC-SAEM --- data imputation --- Bethe free energy --- factor graphs --- message passing --- variational free energy --- variational message passing --- approximate Bayesian computation (ABC) --- differential privacy (DP) --- sparse vector technique (SVT) --- Gaussian --- particle flow --- variable flow --- Langevin dynamics --- Hamilton Monte Carlo --- non-reversible dynamics --- control variates --- thinning --- meta-learning --- hyperparameters --- priors --- online learning --- online optimization --- gradient descent --- statistical learning theory --- PAC–Bayes theory --- deep learning --- generalisation bounds --- Bayesian sampling --- Monte Carlo integration --- no free lunch theorems --- sequential learning --- principal curves --- data streams --- regret bounds --- greedy algorithm --- sleeping experts --- entropy --- robustness --- statistical mechanics --- complex systems




