This spring, I’m teaching a graduate-level special topics course called “Mathematics of Data Science” at The Ohio State University. This will be a research-oriented class, and in lecture, I plan to cover some of the important ideas from convex optimization, probability, dimensionality reduction, clustering, and sparsity.

Click here for a draft of my lecture notes.

The current draft consists of a chapter on convex optimization. I will update the link above periodically. Feel free to comment below.

**UPDATE #1:** Lightly edited Chapter 1 and added a chapter on probability.

**UPDATE #2:** Lightly edited Chapter 2 and added a section on PCA.

**UPDATE #3:** Added a section on random projection.

**UPDATE #4:** Lightly edited Chapter 3. The semester is over, so I don’t plan to update these notes again until I teach a complementary special topics course next year.

**UPDATE #5:** As mentioned above, I’m teaching a complementary installment of this class this semester. I fixed several typos throughout, and I added a new section on embeddings from pairwise data.

**UPDATE #6:** Added a section on the clique problem.

**UPDATE #7:** Added a section on the Lovász number.

**UPDATE #8:** Added a section on the planted clique problem.

**UPDATE #9:** Added sections on maximum cut and minimum normalized cut.

**UPDATE #10:** Added a section on k-means clustering.

**UPDATE #11:** Started a chapter on compressed sensing.

**UPDATE #12:** Started a section on uniform guarantees.