Listing 1 - 10 of 1469
Deep Learning through Sparse Representation and Low-Rank Modeling bridges classical sparse and low-rank models, which emphasize problem-specific interpretability, with recent deep network models that have enabled larger learning capacity and better utilization of Big Data. It shows how the toolkit of deep learning is closely tied to sparse/low-rank methods and algorithms, providing a rich variety of theoretical and analytic tools to guide the design and interpretation of deep learning models. The development of the theory and models is supported by a wide variety of applications in computer vision, machine learning, signal processing, and data mining. This book will be highly useful for researchers, graduate students, and practitioners working in the fields of computer vision, machine learning, signal processing, optimization, and statistics. Key features:
- Combines classical sparse and low-rank models and algorithms with the latest advances in deep learning networks
- Shows how the structure and algorithms of sparse and low-rank methods improve the performance and interpretability of deep learning models
- Provides tactics for building and applying customized deep learning models for various applications
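To make the low-rank idea concrete (this sketch is illustrative and not drawn from the book itself): the best rank-k approximation of a matrix in Frobenius norm comes from its truncated SVD (the Eckart-Young theorem), which is the workhorse behind many low-rank models.

```python
import numpy as np

# Build a matrix that is (nearly) rank 3: a rank-3 product plus small noise.
rng = np.random.default_rng(0)
A = rng.random((50, 3)) @ rng.random((3, 40))   # exactly rank-3 signal
A += 0.01 * rng.random((50, 40))                # small additive noise

# Truncated SVD gives the optimal rank-k approximation (Eckart-Young).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 3
A_k = (U[:, :k] * s[:k]) @ Vt[:k]               # rank-3 reconstruction

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"rank-{k} relative error: {rel_err:.4f}")
```

Because the underlying signal is rank 3, the rank-3 approximation recovers the matrix up to the noise level, which is the sense in which low-rank structure compresses and denoises data.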
"Traditional, low-dimensional, small-scale data have been successfully dealt with using conventional software engineering and classical statistical methods, such as discriminant analysis, neural networks, genetic algorithms, and others. But the change of scale in data collection and the dimensionality of modern data sets has profound implications for the type of analysis that can be done. Recently several kernel-based machine learning algorithms have been developed for dealing with high-dimensional problems, where a large number of features could cause a combinatorial explosion. These methods are quickly gaining popularity, and it is widely believed that they will help to meet the challenge of analysing very large data sets. Learning machines often perform well in a wide range of applications and have nice theoretical properties without requiring any parametric statistical assumption about the source of data (unlike traditional statistical techniques). However, a typical drawback of many machine learning algorithms is that they usually do not provide any useful measure of confidence in the predicted labels of new, unclassified examples. Confidence estimation is a well-studied area of both parametric and non-parametric statistics; however, usually only low-dimensional problems are considered"--
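One standard, distribution-free way to attach a confidence measure to any point predictor is split conformal prediction; the following minimal sketch (illustrative only, with a toy regression problem and made-up data) shows the idea behind it.

```python
import numpy as np

# Toy 1-D regression data: y = 2x + Gaussian noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 300)
y = 2 * x + rng.normal(0, 1, 300)

# Split the data: fit the predictor on one half, calibrate on the other.
x_fit, y_fit = x[:150], y[:150]
x_cal, y_cal = x[150:], y[150:]
slope = np.sum(x_fit * y_fit) / np.sum(x_fit * x_fit)  # least squares through origin

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - slope * x_cal)
alpha = 0.1  # target 90% coverage
q = np.quantile(scores, (1 - alpha) * (1 + 1 / len(scores)))

# Prediction interval for a new point: center the quantile around the prediction.
x_new = 5.0
lo, hi = slope * x_new - q, slope * x_new + q
print(f"predicted {slope * x_new:.2f}, 90% interval [{lo:.2f}, {hi:.2f}]")
```

The interval's validity requires no parametric assumption about the data source, only exchangeability, which is exactly the kind of confidence measure the blurb says typical learning machines lack.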
Providing a unique picture of the complete in-the-wild biometric recognition processing chain, this book covers everything from data acquisition through to detection, segmentation, encoding, and matching, as well as reactions against security incidents. --
Source Separation and Machine Learning presents the fundamentals of adaptive learning algorithms for Blind Source Separation (BSS) and emphasizes the importance of machine learning perspectives. It illustrates how BSS problems are tackled through adaptive learning algorithms and model-based approaches, using information from the mixture signals to build a BSS model that serves as a statistical model of the whole system. Looking at different models, including independent component analysis (ICA), nonnegative matrix factorization (NMF), nonnegative tensor factorization (NTF), and deep neural networks (DNNs), the book addresses how they have evolved to deal with multichannel and single-channel source separation.
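Of the models listed, NMF is easy to sketch end to end; the example below (illustrative, not taken from the book) factors a nonnegative mixture V into W @ H with the classic Lee-Seung multiplicative updates, the basic building block of NMF-based source separation.

```python
import numpy as np

# Hypothetical mixture: 4 sensors x 200 frames of nonnegative (spectrogram-like)
# data generated from 2 latent sources, so a rank-2 factorization can recover it.
rng = np.random.default_rng(0)
true_W = rng.random((4, 2))      # per-sensor mixing patterns of the 2 sources
true_H = rng.random((2, 200))    # per-frame activations of the 2 sources
V = true_W @ true_H              # observed nonnegative mixture

def nmf(V, rank, iters=500, eps=1e-9):
    """Factor V ~= W @ H via Lee-Seung multiplicative updates (Frobenius loss)."""
    rng = np.random.default_rng(1)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(iters):
        # Each update multiplies by a ratio, so W and H stay nonnegative.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")
```

In audio separation, V would be a magnitude spectrogram, the columns of W learned spectral templates, and the rows of H their activations over time; masking V with each term of W @ H then separates the sources.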