
Large-Scale Kernel Machines

PUBLISHER MIT Press (08/17/2007)
PRODUCT TYPE Hardcover

Description

Solutions for learning from large-scale datasets, including kernel learning algorithms that scale linearly with the volume of the data and experiments carried out on realistically large datasets.

Pervasive and networked computers have dramatically reduced the cost of collecting and distributing large datasets. In this context, machine learning algorithms that scale poorly could simply become irrelevant. We need learning algorithms that scale linearly with the volume of the data while maintaining enough statistical efficiency to outperform algorithms that simply process a random subset of the data. This volume offers researchers and engineers practical solutions for learning from large-scale datasets, with detailed descriptions of algorithms and experiments carried out on realistically large datasets. At the same time, it offers researchers information that can address the relative lack of theoretical grounding for many useful algorithms. After a detailed description of state-of-the-art support vector machine technology, an introduction to the essential concepts discussed in the volume, and a comparison of primal and dual optimization techniques, the book progresses from well-understood techniques to more novel and controversial approaches. Many contributors have made their code and data available online for further experimentation. Topics covered include fast implementations of known algorithms, approximations that are amenable to theoretical guarantees, and algorithms that perform well in practice but are difficult to analyze theoretically.
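To make the scaling claim concrete, here is a minimal sketch (not code reproduced from the book) of a primal, hinge-loss linear SVM trained by stochastic gradient descent, a generic textbook formulation of the kind of linear-time method the volume is concerned with: each epoch touches every example once, so training cost grows linearly with the number of examples. The function name sgd_linear_svm and all parameter choices are illustrative assumptions.

# Illustrative sketch: primal hinge-loss linear SVM trained by SGD.
# One pass over the data costs O(n * d), so total training time is
# linear in the number of examples n. Generic formulation only; not
# an algorithm taken from the book.
import numpy as np

def sgd_linear_svm(X, y, lam=0.01, epochs=5, seed=0):
    """X: (n, d) features; y: (n,) labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)        # standard decaying step size
            margin = y[i] * (X[i] @ w)   # margin at the current weights
            w *= (1.0 - eta * lam)       # shrink: gradient of the L2 term
            if margin < 1.0:             # hinge loss is active
                w += eta * y[i] * X[i]
    return w

# Toy usage: two Gaussian blobs, linearly separable on average.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(+1, 1, (200, 2)), rng.normal(-1, 1, (200, 2))])
    y = np.hstack([np.ones(200), -np.ones(200)])
    w = sgd_linear_svm(X, y)
    print(f"training accuracy: {np.mean(np.sign(X @ w) == y):.2f}")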

Contributors
Léon Bottou, Yoshua Bengio, Stéphane Canu, Eric Cosatto, Olivier Chapelle, Ronan Collobert, Dennis DeCoste, Ramani Duraiswami, Igor Durdanovic, Hans-Peter Graf, Arthur Gretton, Patrick Haffner, Stefanie Jegelka, Stephan Kanthak, S. Sathiya Keerthi, Yann LeCun, Chih-Jen Lin, Gaëlle Loosli, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, Gunnar Rätsch, Vikas Chandrakant Raykar, Konrad Rieck, Vikas Sindhwani, Fabian Sinz, Sören Sonnenburg, Jason Weston, Christopher K. I. Williams, Elad Yom-Tov

Product Format
Product Details
ISBN-13: 9780262026253
ISBN-10: 0262026252
Binding: Hardback or Cased Book (Sewn)
Content Language: English
More Product Details
Page Count: 396
Carton Quantity: 18
Product Dimensions: 8.41 x 1.07 x 10.03 inches
Weight: 2.17 pound(s)
Feature Codes: Bibliography, Index, Dust Cover, Table of Contents, Illustrated
Country of Origin: US
Subject Information
BISAC Categories
Computers | Computer Science
Grade Level: College Freshman and up
Dewey Decimal: 005.73
Library of Congress Control Number: 2007000980
Descriptions, Reviews, Etc.

Editor: Chapelle, Olivier
Olivier Chapelle is Senior Research Scientist in Machine Learning at Yahoo.

Editor: Bottou, Léon
Léon Bottou is Senior Research Scientist at NEC Laboratories America in Princeton, New Jersey, and Publications Chair of the 2004 NIPS conference.