Listing 1 - 3 of 3
Sort by

Book
Advanced structured prediction
Author:
ISBN: 9780262028370 9780262322959 0262322951 0262028379 Year: 2014 Publisher: Cambridge (Mass.) : MIT press,


Book
Advanced structured prediction
Author:
ISBN: 026232296X Year: 2014 Publisher: Cambridge, MA : The MIT Press,


Abstract

The goal of structured prediction is to build machine learning models that predict relational information that itself has structure, such as being composed of multiple interrelated parts. These models, which reflect prior knowledge, task-specific relations, and constraints, are used in fields including computer vision, speech recognition, natural language processing, and computational biology. They can carry out such tasks as predicting a natural language sentence, or segmenting an image into meaningful components. These models are expressive and powerful, but exact computation is often intractable. A broad research effort in recent years has aimed at designing structured prediction models and approximate inference and learning procedures that are computationally efficient. This volume offers an overview of this recent research in order to make the work accessible to a broader research community. The chapters, by leading researchers in the field, cover a range of topics, including research trends, the linear programming relaxation approach, innovations in probabilistic modeling, recent theoretical progress, and resource-aware learning.

Sebastian Nowozin is a Researcher in the Machine Learning and Perception (MLP) group at Microsoft Research, Cambridge, England. Peter V. Gehler is a Senior Researcher in the Perceiving Systems group at the Max Planck Institute for Intelligent Systems, Tübingen, Germany. Jeremy Jancsary is a Senior Research Scientist at Nuance Communications, Vienna. Christoph H. Lampert is Assistant Professor at the Institute of Science and Technology Austria, where he heads a group for Computer Vision and Machine Learning.

Contributors: Jonas Behr, Yutian Chen, Fernando De La Torre, Justin Domke, Peter V. Gehler, Andrew E. Gelfand, Sébastien Giguère, Amir Globerson, Fred A. Hamprecht, Minh Hoai, Tommi Jaakkola, Jeremy Jancsary, Joseph Keshet, Marius Kloft, Vladimir Kolmogorov, Christoph H. Lampert, François Laviolette, Xinghua Lou, Mario Marchand, André F. T. Martins, Ofer Meshi, Sebastian Nowozin, George Papandreou, Daniel Průša, Gunnar Rätsch, Amélie Rolland, Bogdan Savchynskyy, Stefan Schmidt, Thomas Schoenemann, Gabriele Schweikert, Ben Taskar, Sinisa Todorovic, Max Welling, David Weiss, Tomáš Werner, Alan Yuille, Stanislav Živný.
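The abstract's idea of predicting an output "composed of multiple interrelated parts" is easiest to see in the simplest exactly solvable case, chain-structured sequence labeling, where dynamic programming (the Viterbi algorithm) finds the jointly best label sequence. The sketch below is a minimal illustration with invented scores; it is not taken from the book, which focuses on the harder settings where such exact inference is intractable.

```python
import numpy as np

# Chain-structured prediction: find y = argmax_y sum_t node[t, y_t]
#                                            + sum_t edge[y_{t-1}, y_t].
# Exact inference is tractable on a chain via dynamic programming.

def viterbi(node_scores, edge_scores):
    """node_scores: (T, K) per-position label scores;
    edge_scores: (K, K) label-transition scores.
    Returns the highest-scoring label sequence as a list of ints."""
    T, K = node_scores.shape
    best = node_scores[0].copy()           # best score ending in each label
    back = np.zeros((T, K), dtype=int)     # backpointers for path recovery
    for t in range(1, T):
        # cand[i, j]: best path ending in label i at t-1, then label j at t
        cand = best[:, None] + edge_scores + node_scores[t][None, :]
        back[t] = np.argmax(cand, axis=0)
        best = np.max(cand, axis=0)
    path = [int(np.argmax(best))]          # trace back the argmax path
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Illustrative scores: the transition matrix rewards keeping the same label,
# so the middle position's weak preference for label 1 is overridden.
node = np.array([[2.0, 0.0], [0.0, 1.0], [1.5, 0.0]])
edge = np.array([[0.5, -1.0], [-1.0, 0.5]])
print(viterbi(node, edge))  # -> [0, 0, 0]
```

Because the transition scores couple neighboring labels, the prediction for each position depends on the whole sequence, which is exactly what distinguishes structured prediction from independent per-position classification.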


Book
Optimization for machine learning
Authors: --- ---
ISBN: 9780262298773 0262298775 1283302845 9781283302845 9780262016469 026201646X 0262297892 9786613302847 Year: 2012 Publisher: Cambridge: MIT Press,


Abstract

An up-to-date account of the interplay between optimization and machine learning, accessible to students and researchers in both communities.

The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields.

Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.
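Two of the themes the abstract names, first-order methods and proximal methods applied to regularized optimization, meet in the classic proximal-gradient (ISTA) iteration for the lasso. The sketch below is a minimal illustration on synthetic data of my own invention, not an implementation from the book:

```python
import numpy as np

# Proximal gradient (ISTA) for the lasso:
#   min_x  0.5 * ||A x - b||^2 + lam * ||x||_1
# Gradient step on the smooth term, then the proximal operator of the
# l1 penalty (soft-thresholding), which produces sparse solutions.

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iters=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)             # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Synthetic sparse-recovery problem (illustrative data, not from the book).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 1.5]                # sparse ground truth
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.round(x_hat[:5], 2))                # first entries near 1.0, -2.0, 1.5
```

The split between a cheap gradient step and a closed-form proximal step is the pattern that the splitting techniques mentioned in the abstract generalize to far larger and more complex regularized models.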
