Listing 1 - 10 of 16 | << page >> |
The celebrated information bottleneck (IB) principle of Tishby et al. has recently enjoyed renewed attention due to its application in the area of deep learning. This collection investigates the IB principle in this new context. The individual chapters in this collection: • provide novel insights into the functional properties of the IB; • discuss the IB principle (and its derivatives) as an objective for training multi-layer machine learning structures such as neural networks and decision trees; and • offer a new perspective on neural network learning through the lens of the IB framework. Our collection thus contributes to a better understanding of the IB principle specifically for deep learning and, more generally, of information-theoretic cost functions in machine learning. This paves the way toward explainable artificial intelligence.
Information technology industries --- information theory --- variational inference --- machine learning --- learnability --- information bottleneck --- representation learning --- conspicuous subset --- stochastic neural networks --- mutual information --- neural networks --- information --- bottleneck --- compression --- classification --- optimization --- classifier --- decision tree --- ensemble --- deep neural networks --- regularization methods --- information bottleneck principle --- deep networks --- semi-supervised classification --- latent space representation --- hand crafted priors --- learnable priors --- regularization --- deep learning
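As a sketch of the objective this entry refers to: the IB principle is usually stated as a variational trade-off between compressing the input X into a representation T and preserving information about the target Y (notation assumed here, following the standard presentation):

```latex
% Information bottleneck Lagrangian: compress X into T while
% retaining information about Y, traded off by \beta \ge 0.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

Larger values of the multiplier β favor predictive representations; smaller values favor stronger compression.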
In the last decade, the number of clinical trials using Bayesian methods has grown dramatically. Nowadays, regulatory authorities appear more receptive to Bayesian methods than ever. The Bayesian methodology is well suited to address the issues arising in the planning, analysis, and conduct of clinical trials. Due to their flexibility, Bayesian design methods based on the accrued data of ongoing trials have been recommended by both the US Food and Drug Administration and the European Medicines Agency for dose-response trials in early clinical development. A distinctive feature of the Bayesian approach is its ability to incorporate external information, such as historical data, findings from previous studies, and expert opinions, through prior elicitation. In fact, it provides a framework for embedding and handling the variability of auxiliary information within the planning and analysis of the study. A growing body of literature examines the use of historical data to augment newly collected data, especially in clinical trials where patients are difficult to recruit, as is the case for rare diseases, for example. Many works explore how this can be done properly, since using historical data is recognized as less controversial than eliciting prior information from experts' opinions. In this book, applications of Bayesian design in the planning and analysis of clinical trials are introduced, along with methodological contributions to specific topics of Bayesian statistics. Finally, two reviews of the state of the art of the Bayesian approach in the field of clinical trials are presented.
Humanities --- Social interaction --- dose-escalation --- combination study --- modelling assumption --- interaction --- adaptive designs --- adaptive randomization --- Bayesian designs --- clinical trials --- predictive power --- target allocation --- Bayesian inference --- highest posterior density intervals --- normal approximation --- predictive analysis --- sample size determination --- Bayesian meta-analysis --- clustering --- binary data --- priors --- frequentist validation --- Bayesian --- rare disease --- prior distribution --- meta-analysis --- sample size --- bridging studies --- distribution distance --- oncology --- phase I --- dose-finding --- dose-response --- prior elicitation --- latent Dirichlet allocation --- clinical trial --- power-prior --- poor accrual --- Bayesian trial --- cisplatin --- doxorubicin --- oxaliplatin --- PIPAC --- peritoneal carcinomatosis --- randomized controlled trial --- causal inference --- doubly robust estimation --- propensity score --- Bayesian monitoring --- futility rules --- interim analysis --- posterior and predictive probabilities --- stopping boundaries --- Bayesian trial design --- early phase dose finding --- treatment combinations --- optimal dose combination
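The borrowing of historical data through prior elicitation described above is often formalized with a power prior, one of the keywords of this entry. A minimal sketch in a conjugate Beta-Binomial setting (function name, prior Beta(1, 1), and all numbers are illustrative assumptions, not taken from the book): the historical likelihood is raised to a discounting weight a0 in [0, 1] before being combined with the new trial data.

```python
# Hedged sketch: power prior borrowing in a Beta-Binomial model.
# y_hist successes out of n_hist historical patients are down-weighted
# by a0 before being pooled with the new data (y_new out of n_new).

def power_prior_posterior(y_hist, n_hist, y_new, n_new, a0):
    """Return Beta(alpha, beta) posterior parameters for a response rate,
    starting from a flat Beta(1, 1) initial prior."""
    alpha = 1 + a0 * y_hist + y_new
    beta = 1 + a0 * (n_hist - y_hist) + (n_new - y_new)
    return alpha, beta

# a0 = 0 discards the historical trial entirely; a0 = 1 pools it fully.
a, b = power_prior_posterior(y_hist=12, n_hist=40, y_new=8, n_new=20, a0=0.5)
posterior_mean = a / (a + b)
```

Intermediate values of a0 interpolate between ignoring and fully pooling the historical trial, which is how the discounting is usually tuned in practice.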
Extremely popular for statistical inference, Bayesian methods are also gaining ground in machine learning and artificial intelligence problems. Bayesian estimators are often implemented by Monte Carlo methods, such as the Metropolis–Hastings algorithm or the Gibbs sampler. These algorithms target the exact posterior distribution. However, many modern models in statistics are simply too complex to use such methodologies. In machine learning, the volume of the data used in practice makes Monte Carlo methods too slow to be useful. On the other hand, these applications often do not require exact knowledge of the posterior. This has motivated the development of a new generation of algorithms that are fast enough to handle huge datasets but that often target only an approximation of the posterior. This book gathers 18 research papers written by Approximate Bayesian Inference specialists and provides an overview of recent advances in these algorithms. This includes optimization-based methods (such as variational approximations) and simulation-based methods (such as ABC or Monte Carlo algorithms). The theoretical aspects of Approximate Bayesian Inference are covered, specifically the PAC-Bayes bounds and regret analysis. Applications to challenging computational problems in astrophysics, finance, medical data analysis, and computer vision are also presented.
Research & information: general --- Mathematics & science --- bifurcation --- dynamical systems --- Edward–Sokal coupling --- mean-field --- Kullback–Leibler divergence --- variational inference --- Bayesian statistics --- machine learning --- variational approximations --- PAC-Bayes --- expectation-propagation --- Markov chain Monte Carlo --- Langevin Monte Carlo --- sequential Monte Carlo --- Laplace approximations --- approximate Bayesian computation (ABC) --- Gibbs posterior --- MCMC --- stochastic gradients --- neural networks --- differential evolution --- Markov kernels --- discrete state space --- ergodicity --- Markov chain --- probably approximately correct --- variational Bayes --- Bayesian inference --- Riemann manifold Hamiltonian Monte Carlo --- integrated nested Laplace approximation --- fixed-form variational Bayes --- stochastic volatility --- network modeling --- network variability --- Stiefel manifold --- MCMC-SAEM --- data imputation --- Bethe free energy --- factor graphs --- message passing --- variational free energy --- variational message passing --- differential privacy (DP) --- sparse vector technique (SVT) --- Gaussian --- particle flow --- variable flow --- Langevin dynamics --- Hamiltonian Monte Carlo --- non-reversible dynamics --- control variates --- thinning --- meta-learning --- hyperparameters --- priors --- online learning --- online optimization --- gradient descent --- statistical learning theory --- PAC-Bayes theory --- deep learning --- generalisation bounds --- Bayesian sampling --- Monte Carlo integration --- no free lunch theorems --- sequential learning --- principal curves --- data streams --- regret bounds --- greedy algorithm --- sleeping experts --- entropy --- robustness --- statistical mechanics --- complex systems
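The Metropolis–Hastings algorithm named in the abstract above can be sketched in a few lines. This is a generic illustration, not code from the book: a random-walk sampler targeting an unnormalized standard-normal "posterior" density, with step size and sample count chosen arbitrarily.

```python
import math
import random

# Hedged sketch of random-walk Metropolis-Hastings targeting an
# unnormalized N(0, 1) density. Target, step size, and sample count
# are illustrative assumptions.

def log_target(x):
    return -0.5 * x * x  # log of an unnormalized N(0, 1) density

def metropolis_hastings(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)      # symmetric proposal
        log_accept = log_target(proposal) - log_target(x)
        if math.log(rng.random()) < log_accept:  # accept or reject
            x = proposal
        samples.append(x)                        # keep current state
    return samples

draws = metropolis_hastings(20000)
mean = sum(draws) / len(draws)  # should be close to 0 for N(0, 1)
```

Because the proposal is symmetric, the Hastings correction ratio cancels and only the target densities enter the acceptance probability; this exactness in the stationary distribution is what the faster approximate methods surveyed in the book trade away.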