
Mathematical pictures at a data science exhibition

By: Foucart, Simon [Author]
Material type: Text
Language: English
Publication details: United Kingdom: Cambridge University Press, 2022.
Description: xx, 318 p.; 22 cm
ISBN: 9781009001854
Subject(s): Computer science -- Mathematics | Compressed sensing, optimization, and neural networks | Rudiments of statistical learning
DDC classification: 005.7 F68
Holdings
Item type | Current library | Call number | Copy no. | Status | Date due | Barcode
Books | Dr. S. R. Ranganathan Library, General Stacks | 005.7 F68 | 1 | Available | | 2811
Books | Dr. S. R. Ranganathan Library, General Stacks | 005.7 F68:1 | 2 | Available | | 2812
Books | Dr. S. R. Ranganathan Library, General Stacks | 005.7 F68:2 | 3 | Available | | 2813
Books | Dr. S. R. Ranganathan Library, General Stacks | 005.7 F68:3 | 4 | Available | | 2814
Books | Dr. S. R. Ranganathan Library, General Stacks | 005.7 F68:4 | 5 | Available | | 2815

This text provides deep and comprehensive coverage of the mathematical background for data science, including machine learning, optimal recovery, compressed sensing, optimization, and neural networks. In the past few decades, heuristic methods adopted by big tech companies have complemented existing scientific disciplines to form the new field of Data Science. This text takes readers on an engaging itinerary through the theory supporting the field. Altogether, twenty-seven lecture-length chapters with exercises provide all the details necessary for a solid understanding of key topics in data science. While the book covers standard material on machine learning and optimization, it also includes distinctive presentations of topics such as reproducing kernel Hilbert spaces, spectral clustering, optimal recovery, compressed sensing, group testing, and applications of semidefinite programming. Students and data scientists with less mathematical training will appreciate the appendices, which provide additional background on some of the more abstract concepts.

Specially designed for mathematicians and graduate students in mathematics who want to learn more about data science
Presents a broad view of mathematical data science by including a wide variety of subjects, from the very popular subject of machine learning to the lesser-known subject of optimal recovery
Proves at least one theoretical result in each chapter, helping the reader develop a sound understanding of topics explained with detailed arguments
Includes original content that has never been published before in book form, such as the presentation of compressive sensing through a nonstandard restricted isometry property
Provides background for some of the more abstract concepts in the appendices
Author's GitHub page includes computational illustrations made in MATLAB and Python to demonstrate how the theory is applied

Part I. Machine Learning:

1. Rudiments of Statistical Learning
2. Vapnik–Chervonenkis Dimension
3. Learnability for Binary Classification
4. Support Vector Machines
5. Reproducing Kernel Hilbert Spaces
6. Regression and Regularization
7. Clustering
8. Dimension Reduction
Part II. Optimal Recovery:
9. Foundational Results of Optimal Recovery
10. Approximability Models
11. Ideal Selection of Observation Schemes
12. Curse of Dimensionality
13. Quasi-Monte Carlo Integration
Part III. Compressive Sensing:
14. Sparse Recovery from Linear Observations
15. The Complexity of Sparse Recovery
16. Low-Rank Recovery from Linear Observations
17. Sparse Recovery from One-Bit Observations
18. Group Testing
Part IV. Optimization:
19. Basic Convex Optimization
20. Snippets of Linear Programming
21. Duality Theory and Practice
22. Semidefinite Programming in Action
23. Instances of Nonconvex Optimization
Part V. Neural Networks:
24. First Encounter with ReLU Networks
25. Expressiveness of Shallow Networks
26. Various Advantages of Depth
27. Tidbits on Neural Network Training
Appendix A. High-Dimensional Geometry
Appendix B. Probability Theory
Appendix C. Functional Analysis
Appendix D. Matrix Analysis
Appendix E. Approximation Theory

