A record of my notes as I go through the Deep Learning textbook.
A bit of context: I've been dabbling in AI research but am largely self-taught so far, without any formal computer science background. I've mostly picked things up as and when I needed them, but I've always felt the need to fix my foundations, so here's an attempt at that. With that in mind, my goal is to work through the entire book from start to finish, including the foundational math chapters at the beginning.
Unlike the Reinforcement Learning textbook (Sutton & Barto), the Deep Learning textbook does not contain any exercises. Instead, I will attempt to implement the algorithms in Python where appropriate.
The annotations reference the hard copy of the book, whose page numbers do not correspond to the web version. I will update all the page references once I finish the annotations.
- 1 Introduction
- 2 Linear Algebra
- 3 Probability and Information Theory
- 4 Numerical Computation
- 5 Machine Learning Basics
- 6 Deep Feedforward Networks
- 7 Regularization for Deep Learning
- 8 Optimization for Training Deep Models
- 9 Convolutional Networks
- 10 Sequence Modeling: Recurrent and Recursive Nets
- 11 Practical Methodology
- 12 Applications
- 13 Linear Factor Models
- 14 Autoencoders
- 15 Representation Learning
- 16 Structured Probabilistic Models for Deep Learning
- 17 Monte Carlo Methods
- 18 Confronting the Partition Function
- 19 Approximate Inference
- 20 Deep Generative Models