TY - BOOK
ID - 145732923
TI - Information Bottleneck: Theory and Applications in Deep Learning
AU - Geiger, Bernhard
AU - Kubin, Gernot
PY - 2021
PB - MDPI - Multidisciplinary Digital Publishing Institute
CY - Basel, Switzerland
DB - UniCat
KW - Information technology industries
KW - information theory
KW - variational inference
KW - machine learning
KW - learnability
KW - information bottleneck
KW - representation learning
KW - conspicuous subset
KW - stochastic neural networks
KW - mutual information
KW - neural networks
KW - information
KW - bottleneck
KW - compression
KW - classification
KW - optimization
KW - classifier
KW - decision tree
KW - ensemble
KW - deep neural networks
KW - regularization methods
KW - information bottleneck principle
KW - deep networks
KW - semi-supervised classification
KW - latent space representation
KW - hand crafted priors
KW - learnable priors
KW - regularization
KW - deep learning
UR - https://www.unicat.be/uniCat?func=search&query=sysid:145732923
AB - The celebrated information bottleneck (IB) principle of Tishby et al. has recently enjoyed renewed attention due to its application in the area of deep learning. This collection investigates the IB principle in this new context.
The individual chapters in this collection: • provide novel insights into the functional properties of the IB; • discuss the IB principle (and its derivatives) as an objective for training multi-layer machine learning structures such as neural networks and decision trees; and • offer a new perspective on neural network learning through the lens of the IB framework. Our collection thus contributes to a better understanding of the IB principle specifically for deep learning and, more generally, of information-theoretic cost functions in machine learning. This paves the way toward explainable artificial intelligence.
ER -