Listing 1 - 10 of 145
This text details advances in learning theory that relate to problems studied in neural networks, machine learning, mathematics and statistics.
Artificial intelligence. Robotics. Simulation. Graphics --- Mathematical statistics --- Computational learning theory --- Congresses --- Engineering & Applied Sciences --- Computer Science --- Machine learning --- Mathematical models --- Machine theory --- E-books
This book focuses on Least Squares Support Vector Machines (LS-SVMs), which are reformulations of standard SVMs. LS-SVMs are closely related to regularization networks and Gaussian processes but additionally emphasize and exploit primal-dual interpretations from optimization theory. The authors explain the natural links between LS-SVM classifiers and kernel Fisher discriminant analysis. Bayesian inference of LS-SVM models is discussed, together with methods for imposing sparseness and employing robust statistics. The framework is further extended towards unsupervised learning by considering P
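The LS-SVM reformulation described above replaces the SVM's inequality constraints with equality constraints, so training a classifier reduces to solving a single linear (KKT) system instead of a quadratic program. A minimal sketch, assuming an RBF kernel; the hyperparameter values (gamma, sigma) are illustrative, not taken from the book:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM classifier's KKT linear system:
        [0   y^T              ] [b]     [0]
        [y   Omega + I/gamma  ] [a]  =  [1]
    where Omega_ij = y_i * y_j * K(x_i, x_j)."""
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X, y, alpha, b, Xnew, sigma=1.0):
    # Decision function: sign(sum_i alpha_i y_i K(x, x_i) + b)
    K = rbf_kernel(Xnew, X, sigma)
    return np.sign(K @ (alpha * y) + b)
```

Unlike a standard SVM, every training point gets a (generally nonzero) coefficient alpha_i here, which is why the book devotes attention to methods for imposing sparseness.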
517.5 Theory of functions --- 681.3*C12 Multiple data stream architectures (multiprocessors): MIMD; SIMD; pipeline and parallel processors; array-, vector-, associative processors; interconnection architectures: common bus, multiport memory, crossbar switch --- 681.3*G12 Approximation: chebyshev; elementary function; least squares; linear approximation; minimax approximation and algorithms; nonlinear and rational approximation; spline and piecewise polynomial approximation (Numerical analysis) --- Method of least squares --- Squares, Least --- Curve fitting --- Triangulation --- Machine learning --- Learning, Machine --- Algorithms --- Algorism --- Kernel functions --- Functions, Kernel --- Least squares --- AA / International --- 303.5 --- 305.976 --- 305.971 --- Theory of correlation and regression (OLS, adjusted LS, weighted LS, restricted LS, GLS, SLS, LIML, FIML, maximum likelihood); parametric and non-parametric methods and theory (mathematical statistics) --- Algorithms. Optimization --- Special cases in econometric model building --- Planning (firm) --- Operational research. Game theory --- Geodesy --- Mathematical statistics --- Mathematics --- Probabilities --- Functions of complex variables --- Geometric function theory --- Algebra --- Arithmetic --- Artificial intelligence --- Machine theory --- Foundations --- Support vector machines --- E-books
Neural networks (Computer science) --- Nonlinear systems
Many practical problems and applications of data-driven modelling are embedded in our daily lives and support decision making in various business and industry domains. The proliferation of data lakes and exponentially growing data volumes are a source of new challenges in the machine learning and data governance fields. This immense amount of data has resulted in massive web-scale sources of information and very large datasets, leading industry and science to the emerging concept of Big Data. Effective analysis, understanding and learning from such sources of data using flexible software tools are among the primary goals of this thesis, which is concerned with kernel-based and sparse linear modelling. Kernel-based methods such as Support Vector Machines (SVM) and Least-Squares Support Vector Machines (LSSVM) are among the most popular machine learning techniques for solving complex classification, regression, clustering or correlation analysis tasks. These techniques can handle nonlinear mappings from the input space to the function (solution) space and deal with non-stationary data. In this thesis we focus on kernel-based methods for unsupervised and semi-supervised learning. Among unsupervised techniques we study density estimation problems for fault, anomaly and novelty detection. The latter problem is investigated through the Multi-Class Supervised Novelty Detection (SND) method, in which we combine ideas from One-Class Support Vector Machines with a novel multi-class classification approach that discriminates between the modelled i.i.d. distributions and their corresponding densities. Within semi-supervised techniques we have developed an extension of the Kernel Spectral Clustering (KSC) framework tailored towards learning from both labeled and unlabeled data, using ideas very similar to the SND approach. Another contribution of this thesis is a study of the stochastic learning paradigm.
In this setting one deals with randomly drawn subsamples of the entire dataset: with respect to each subsample, we iteratively update the solution so as to achieve, in expectation, a nearly optimal result. We study stochastic optimization and learning for classification, regression and clustering problems. Along these lines we develop several extensions of the Pegasos (Primal Estimated sub-GrAdient SOlver for SVM) algorithm, as well as specific reweighted modifications of dual averaging schemes with primal-dual iterate updates. In the first approach we modify the Pegasos algorithm by incorporating novel loss functions, such as the pinball loss, which is robust in the presence of outliers. Another modification of the Pegasos algorithm uses the Nyström-approximated feature space, through which we obtain nonlinear decision boundaries. In the second direction we implement very sparse linear classification and regression models using the regularized dual averaging framework, which supports different regularization schemes, for instance elastic net and l1-regularization. As a novel contribution to this framework we propose reweighted l1- and l2-regularization schemes. Many machine learning software packages are currently available, but most are intended for novice practitioners and cannot easily be adopted by scientists from other fields. There is a clear need for simple, self-explanatory software design and user-friendly usage patterns, in which sophisticated machine learning methods are wrapped in out-of-the-box tuning, cross-validation and evaluation procedures. One of the essential contributions of this thesis is the SALSA.jl (Software lab for Advanced machine Learning with Stochastic Algorithms in Julia) software library.
It combines kernel-based and sparse linear modelling with stochastic learning methods, and applies sound software design principles to provide a scalable, robust and user-friendly black-box modelling library.
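The baseline Pegasos update that the thesis extends can be sketched in a few lines: at each step, draw one example, shrink w by the regularizer's sub-gradient, and take a hinge-loss step whenever the margin constraint is violated. A minimal sketch for the linear case with illustrative hyperparameters; the pinball-loss and Nyström variants described above would replace the margin test and the feature map, respectively:

```python
import random

def pegasos(data, labels, lam=0.1, T=2000, seed=0):
    """Pegasos: stochastic sub-gradient descent on the linear SVM
    objective (lam/2)*||w||^2 + mean hinge loss, with step size
    eta_t = 1 / (lam * t) at iteration t."""
    rng = random.Random(seed)
    d = len(data[0])
    w = [0.0] * d
    for t in range(1, T + 1):
        i = rng.randrange(len(data))     # sample one training example
        x, y = data[i], labels[i]
        eta = 1.0 / (lam * t)
        margin = y * sum(wj * xj for wj, xj in zip(w, x))
        # Shrink step: sub-gradient of the l2 regularizer
        w = [(1 - eta * lam) * wj for wj in w]
        if margin < 1:                   # hinge loss active: step towards y*x
            w = [wj + eta * y * xj for wj, xj in zip(w, x)]
    return w
```

The pinball-loss modification would also push w when the margin exceeds 1, with a (scaled) opposite-sign sub-gradient, which is what makes it robust to outliers near the boundary.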
This first volume of a four-volume set, edited and authored by world-leading experts, reviews the principles, methods and techniques of important and emerging research topics and technologies in machine learning and advanced signal processing theory. With this reference source you will: quickly grasp a new area of research; understand the underlying principles of a topic and its application; ascertain how a topic relates to other areas; and learn of the research issues yet to be resolved. Quick tutorial reviews of important
Signal processing. --- Machine learning. --- Learning, Machine --- Artificial intelligence --- Machine theory --- Processing, Signal --- Information measurement --- Signal theory (Telecommunication)
For engineering applications based on nonlinear phenomena, novel information processing systems require new methodologies and design principles. This perspective is the basis of the three cornerstones of this book: cellular neural networks, chaos and synchronization. Cellular neural networks and their universal machine implementations offer a well-established platform for processing spatio-temporal patterns and wave computing. Multi-scroll circuits are generalizations of the original Chua's circuit, leading to chip-implementable circuits with increasingly complex attractors. Several
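The original Chua's circuit that these multi-scroll designs generalize can be simulated directly from its dimensionless state equations. A minimal sketch, assuming the classic double-scroll parameter set (alpha = 15.6, beta = 28, piecewise-linear diode slopes m0 = -1.143, m1 = -0.714) and a fixed-step RK4 integrator; the parameter values are illustrative, not taken from the book:

```python
def chua_rhs(state, alpha=15.6, beta=28.0, m0=-1.143, m1=-0.714):
    # Dimensionless Chua equations; f(x) is the piecewise-linear diode
    x, y, z = state
    f = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
    return (alpha * (y - x - f), x - y + z, -beta * y)

def simulate_chua(state=(0.7, 0.0, 0.0), dt=0.005, steps=20000):
    # Classic fourth-order Runge-Kutta integration of the trajectory
    traj = [state]
    for _ in range(steps):
        s = traj[-1]
        k1 = chua_rhs(s)
        k2 = chua_rhs(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = chua_rhs(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = chua_rhs(tuple(si + dt * ki for si, ki in zip(s, k3)))
        traj.append(tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                          for si, a, b, c, d in zip(s, k1, k2, k3, k4)))
    return traj
```

Multi-scroll generalizations replace the two-segment nonlinearity f(x) with a staircase of breakpoints, each pair of segments contributing an additional scroll to the attractor.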
Neural networks (Computer science) --- Nonlinear systems. --- Chaotic behavior in systems. --- Synchronization. --- Computer engineering. --- Computers --- Synchronism --- Time measurements --- Chaos in systems --- Chaos theory --- Chaotic motion in systems --- Differentiable dynamical systems --- Dynamics --- Nonlinear theories --- System theory --- Systems, Nonlinear --- Artificial neural networks --- Nets, Neural (Computer science) --- Networks, Neural (Computer science) --- Neural nets (Computer science) --- Artificial intelligence --- Natural computation --- Soft computing --- Design and construction
In this internship report, we investigate whether state-of-the-art deep learning approaches to image segmentation can outperform the rule-based approach currently used in-house at Geotakas for classifying land cover from remote sensing imagery. An open-source dataset based on Sentinel-2 imagery is selected for training, and different cloud filtering approaches are evaluated. To create the land-cover segmentations, the project explores two approaches: 1) fine-tuning four pretrained model backbones on satellite RGB imagery using a U-Net architecture, and 2) training a U-Net model from scratch, incorporating both the RGB and near-infrared channels. Both the fine-tuned model and the model trained from scratch outperform the rule-based model. Despite the promising performance of deep learning-based models for land-cover segmentation, training data availability and hardware requirements pose challenges for adopting the deep learning solution.
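For context, rule-based land-cover classifiers of the kind the report benchmarks against typically threshold spectral indices such as NDVI, computed from the red and near-infrared bands. A hypothetical minimal sketch; the function names and the 0.3 threshold are illustrative assumptions, not Geotakas's actual rules:

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index, in [-1, 1]
    denom = nir + red
    return (nir - red) / denom if denom else 0.0

def classify_pixel(nir, red, veg_threshold=0.3):
    """Toy rule: NDVI above the threshold -> 'vegetation', else 'other'.
    Real rule sets combine several indices and thresholds per class."""
    return "vegetation" if ndvi(nir, red) > veg_threshold else "other"
```

Hand-tuned thresholds like this are exactly what makes rule-based approaches brittle across seasons and sensors, which motivates the learned models evaluated in the report.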