In this episode, Grant Sanderson (GS) interviews Alex Kontorovich (AK) about all things academia, and what follows is some commentary by John Jasper (JJ), Hans Parshall (HP), and me (DM). Feel free to discuss further in the comments.
Two years ago, I blogged about Tuan-Yow Chien’s PhD thesis, which proved Zauner’s conjecture in dimension 17. The idea was to exploit certain conjectures on the field structure of SIC-POVM fiducial vectors so as to round numerical solutions to exact solutions. This week, the arXiv announced Chien’s latest paper (coauthored with Appleby, Flammia and Waldron), which extends this work to find exact solutions in 8 new dimensions.
The following line from the introduction caught my eye:
For instance the print-out for exact fiducial 48a occupies almost a thousand A4 pages (font size 9 and narrow margins).
As my previous blog entry illustrated, the description length of SIC-POVM fiducial vectors appears to grow rapidly with the dimension. However, it seems that the rate of growth is much better than I originally thought. Here's a plot of the description lengths of the known fiducial vectors (the new ones, due to ACFW17 and available here, appear in red):
I got exactly what I wanted for Christmas this year! This book is great, and I highly recommend it:
True story: One evening in 1996, I remember watching the news with my parents, and the program concluded with a "Persons of the Week" segment, in which the winner of the Westinghouse Science Talent Search was interviewed. Jacob Lurie's winning research investigated a certain collection of numbers that, at the time, didn't seem terribly exciting to me. I asked my parents, "What's so interesting about serial numbers?" After laughing at my honest confusion, my parents offered some explanation: "He's talking about surreal numbers, not serial numbers." But in the absence of Wikipedia, no further explanation could be provided.
I've been thinking a lot about my place in the world lately. I'm interested in doing math that makes a difference, and considering that many of the breakthroughs in our society have come from various startups, I decided to investigate the startup culture. How might academia benefit from startup culture? One could easily imagine a hip research environment adorned with beanbag chairs and foosball tables, but these perks aren't the stuff that makes a startup successful. To catch a glimpse, I turned to a book recently written by Peter Thiel (of PayPal fame):
I recently finished Nate Silver's famous book. Some parts were more fun to read than others, but overall, it was worth the read. I was impressed by Nate's apparently vast perspective, and he did a good job of pointing out how bad we are at predicting certain things (and explaining some of the bottlenecks).
Based on the reading, here’s a brief list of stars that need to align in order to succeed at prediction:
Jordan introduces the book with a college student asking why math is so important. This book is an answer of sorts: It provides a bunch of simple yet profound morsels of mathematical thinking. Actually, most of Jordan's examples reveal how to properly think about the sort of math you might encounter in a newspaper (e.g., a statistically significant association between balding and future prostate cancer). I wonder if this book could form the basis of a "Math for the Humanities" type of class. Such a class might have more to offer the non-technical college student than calculus would. Overall, I highly recommend this book to everyone (including my mathematically disoriented wife).
Phase transitions are very common in modern data analysis (see this paper, for example). The idea is that, for a given task whose inputs are random linear measurements, there is often a magic number m* of measurements such that if you input fewer than m* measurements, the task will typically fail, but if you input more than m* measurements, the task will typically succeed. As a toy example, suppose the task is to reconstruct an arbitrary vector in R^n from its inner products with m random vectors; of course, m* = n in this case. There are many tasks possible with data analysis, signal processing, etc., and it's interesting to see what phase transitions emerge in these cases. The following paper introduces some useful techniques for exactly characterizing phase transitions of tasks involving convex optimization:
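To see the toy example in action, here's a minimal NumPy sketch (my own illustration, not from the paper): we draw m random Gaussian measurement vectors and attempt to recover an unknown vector by least squares. With m one less than the dimension the system is underdetermined and recovery typically fails; at m equal to the dimension the random system is generically invertible and recovery succeeds.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                       # ambient dimension
x = rng.standard_normal(n)   # unknown vector to recover

def recovers(m):
    """Attempt to recover x from m random linear measurements via least squares."""
    A = rng.standard_normal((m, n))  # random measurement vectors as rows
    y = A @ x                        # observed inner products
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.allclose(x_hat, x)

print(recovers(n - 1))  # m < n: underdetermined, typically fails
print(recovers(n))      # m = n: generically invertible, typically succeeds
```

Of course, the interesting phase transitions arise for structured recovery problems (e.g., sparse vectors with m* well below n), where the transition point is exactly what the paper's techniques characterize.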
I think most would agree that the way we do math research has completely changed with technology in the last few decades. Today, I type research notes in LaTeX, I run simulations in MATLAB and Mathematica, I email with collaborators on a daily basis, I read the arXiv and various math blogs to keep in the know, and when I get stuck on something that’s a little outside my expertise, I ask a question on MathOverflow. With this in mind, can you think of another step we can take with technology that will revolutionize the way we conduct math research? It might sound ambitious, but the following paper is looking to make one such step:
The vision of this paper is to make automated provers extremely mathematician-friendly so that they can be used on a day-to-day basis to help prove various lemmas and theorems. Today, we might use computers as a last resort to verify a slew of cases (e.g., to prove the four color theorem or the Kepler conjecture). The hope is that in the future, we will be able to seamlessly interface with computers to efficiently implement human-type logic (imagine HAL 9000 as a collaborator).
The paper balances its ambitious vision with a modest scope: The goal is to produce an algorithm which emulates the way human mathematicians (1) prove some of the simplest results that might appear in undergraduate-level math homework, and (2) write the proofs in LaTeX. To be fair, the only thing modest about this scope is the hardness of the results that are attempted, and as a first step toward the overall vision, this simplification is certainly acceptable. (Examples of attempted results include “a closed subset of a complete metric space is complete” and “the intersection of open sets is open.”)
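For a sense of what such homework-level statements look like when formalized in today's proof assistants, here is a rough Lean 4 sketch of one of the attempted results, "the intersection of open sets is open" (stated for two sets); this uses mathlib's topology library, and is my own illustration of the genre rather than anything produced by the paper's system:

```lean
import Mathlib.Topology.Basic

-- "The intersection of (two) open sets is open,"
-- stated in an arbitrary topological space X.
theorem inter_of_open {X : Type*} [TopologicalSpace X]
    {U V : Set X} (hU : IsOpen U) (hV : IsOpen V) :
    IsOpen (U ∩ V) :=
  hU.inter hV
```

Note the gap the paper aims to close: a proof assistant happily accepts the one-line term above, but a human mathematician would write a short prose proof, and producing that human-readable LaTeX output automatically is the stated goal.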
A couple of months ago, Afonso recommended I read this new book by Leslie Valiant. In fact, this book had already been recommended by one of the blogs I follow, so I decided it must be worth the read. The purpose of this blog entry is to document some notes that I took while reading this fascinating book.
Considering the well-established theory of computability (and computational complexity), in hindsight, it seems rather natural to seek a theory of the learnable. What sort of things are inherently learnable, and efficiently so? In order to build a solid theory of learnability, we first need to make some assumptions. Valiant proposes two such assumptions:
1. The Invariance Assumption. Data collected for the learning phase comes from the same source as the data collected for the testing phase. Otherwise, the experiences you learn from are irrelevant to the situations you apply your learning to. In the real world, non-periodic changes in our environment are much slower than our learning processes, and so this is a reasonable assumption for human learning. As for machine learning, the validity of this assumption depends on how well the curriculum is designed.
2. The Learnable Regularity Assumption. In both the learning and testing phases, there are a few regularities available which completely distinguish objects so that efficient categorization is feasible. As humans who learn our environment, our senses are useful because they help us to identify objects, and our brains apply efficient algorithms to determine relationships between these objects, cause and effect, etc. Another way to phrase this assumption is that patterns can be recognized efficiently.