Linear Algebra (MIT 18.06)

Gilbert Strang
Video series · Beginner · Free · ~35 lectures
Summary

Gilbert Strang's full MIT 18.06 course. The single best place to learn linear algebra well enough to read ML papers without flinching.


Strang is famous for a reason. He teaches linear algebra geometrically first and algebraically second, which is the right order for anyone who’s going to use it. By lecture 5 you understand what column space actually means; by lecture 15 you understand SVD better than most people who’ve used it for years. The course is old (2010) but the math hasn’t moved.

For ML specifically, the lectures you absolutely cannot skip are the ones on projections, eigenvalues and eigenvectors, the four fundamental subspaces, and SVD. These map directly onto things you'll meet later: PCA is just an SVD with a centering step, attention is a projection, and in least squares every prediction gradient descent can reach lives inside the column space of your design matrix.
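The "PCA is just an SVD with a centering step" claim is easy to verify numerically. A minimal sketch with NumPy (the toy data and variable names are mine, not from the course): center the data, take the SVD, and check that the squared singular values match the eigenvalues of the covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # toy data: 100 samples, 5 features
n = len(X)

# PCA via SVD: center the columns, then decompose.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Principal components are the rows of Vt; project onto the top two:
Z = Xc @ Vt[:2].T  # shape (100, 2)

# Sanity check: the covariance eigenvalues equal S**2 / (n - 1),
# which is exactly the SVD-eigendecomposition link Strang derives.
cov = Xc.T @ Xc / (n - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]  # sort descending
assert np.allclose(S**2 / (n - 1), eigvals)
```

The same check works in either direction: you can run PCA through `eigh` on the covariance matrix, but the SVD route is the numerically safer one since it never forms XᵀX explicitly.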

If you’re short on time, watch lectures 1, 3, 5, 6, 9, 10, 14, 15, 21, and 29. That’s the spine. Skip the determinants block — you’ll almost never compute one by hand in real ML work, and the abstract version is more trouble than it’s worth at this stage.
