Feynman’s Rigor
by James Somers, March 2, 2009
All of the things I admire about Richard Feynman -- his intellect, and verve, and eloquence -- seem like special cases of a more general feature of his mind, namely, its almost fanatical devotion to rigor.
Here's an example of what I mean, taken from a wonderful essay, "Richard Feynman and The Connection Machine":
Concentrating on the algorithm for a basic arithmetic operation was typical of Richard's approach. He loved the details. In studying the router, he paid attention to the action of each individual gate and in writing a program he insisted on understanding the implementation of every instruction. He distrusted abstractions that could not be directly related to the facts. When several years later I wrote a general interest article on the Connection Machine for [Scientific American], he was disappointed that it left out too many details. He asked, "How is anyone supposed to know that this isn't just a bunch of crap?"
Feynman would only claim to know something if he knew he knew it. And the way he got there, I think, was by never stretching himself too thin -- never moving to step C without first nailing A and B. So when he approached a new problem he would start with concrete examples at the lowest level; when he read, he would "translate" everything into his own words (or better yet, a vivid picture). My guess is that for him, the worst feeling in the world was not being able to explain an idea clearly, because that meant he didn't really own it.
Of course I'd like to think I'm the same way, but I'm not. I skim passages and skip equations. I claim to know more than I do.
But I say that fully confident that most everyone I know is the same way; it turns out that being a "details guy" is harder than it sounds.
* * *
Now the exciting thing, I think, is that you can teach yourself to work a little more rigorously.
One way is to buy a good abstract algebra textbook and work through all of the problems. Algebra's a good place to start because (a) it doesn't require calculus, (b) the problems are intuitive, and (c) it's mostly proof-based. Which means you'll get the full thrill of really knowing things (because you proved them), without having to learn a whole new language (e.g. analysis).
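To give a flavor of what those problems look like, here's a typical first exercise (nothing fancy, just the kind of thing an intro book opens with): prove that a group's identity element is unique. In LaTeX it's about three lines:

    % Claim: a group G has exactly one identity element.
    Suppose $e$ and $e'$ are both identities of a group $G$. Then
    \[
      e = e \cdot e' = e',
    \]
    where the first equality holds because $e'$ is an identity and the
    second because $e$ is an identity. Hence $e = e'$.

It's a tiny argument, but you can't write it down honestly without knowing exactly what "identity" means -- which is the point.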
But a better recommendation might be to start hacking. For one thing, you can start building stuff right away; with math it takes a lot longer (7-8 years) to get to the point where you can produce something original.
What's really good about hacking, though, for the purposes of rigorizing, is that you can't make a program work without an honest-to-God understanding of the details. The reason is that the interpreter doesn't actually do much "interpreting"; it does exactly what you say, no more or less. Which means you have to know exactly what you are talking about.
That imperative becomes clearest when something goes wrong, because that's when you really have to look under the hood. Was that index supposed to cut off at the second-to-last item of your list or the third-to-last? What's happening to those instance variables at the end of each loop? Why is that function not getting called? The only way to ensure a long chain of computation ends up the way it's supposed to is to know what's happening at every step, and in that sense, debugging a program teaches you to do explicitly what guys like Feynman seem to do naturally: work hard at level zero, keeping a close eye on every moving part.
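To make the first of those questions concrete, here's a tiny Python sketch (a toy list, just to illustrate) of the kind of off-by-one detail you end up pinning down:

    # Does this slice stop at the second-to-last item or the third-to-last?
    items = [10, 20, 30, 40, 50]

    print(items[:-1])   # drops only the last item -> [10, 20, 30, 40]
    print(items[:-2])   # drops the last two items -> [10, 20, 30]

    # "Level zero" work: trace the loop and watch every index and value.
    for i, value in enumerate(items[:-2]):
        print(i, value)

There's nothing ambiguous about a slice once you print it; the point is that the broken program forces you to go and print it.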
While I think you were just trying to make an admittedly correct point, an interpreter is so called because it interprets from one form of code to another–and yes, it interprets. The reason it does so exactly is that it is doing so correctly.
I would argue very, very strongly that you absolutely can produce something that you don’t understand while hacking. Scaffolding is almost by definition exactly that. I can make a fairly complex web application in Django, Ruby on Rails or CodeIgniter, but there’s no way I know all the details of what’s going on underneath.
I would even argue that a lot of code is produced out of dumb luck–I know that has happened to me on several occasions, and I’ve read more than enough production-level code to know that it’s not uncommon.
Good points. I think my mistake here was in having Project Euler in mind while writing. As you know, PE is a special kind of introductory programming in that the problems there require a single, precise answer. So the inflexibility that makes programming in general a (relatively) good exercise in rigor is really amplified.
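For what it's worth, here's a sketch of what I mean: the first Project Euler problem asks for the sum of all the multiples of 3 or 5 below 1000, and the site accepts exactly one number, so the boundary has to be exactly right.

    # Project Euler, Problem 1: sum of the multiples of 3 or 5 below 1000.
    # range(1000) stops at 999, which is what "below 1000" means here;
    # range(1001) would also run, but it counts 1000 (a multiple of 5)
    # and gives an answer the site rejects.
    total = sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)
    print(total)

A web app can be sloppy in a hundred places and still render; this can't.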
When I consider e.g. web programming, I agree with you that you can get away with a lot without deep understanding; but I still find myself tracing all kinds of errors in detail. As someone on the Internet said (in a comment somewhere, I think):
That seems roughly true. And the thing about mistakes is that they force you into the kind of thinking that I imagine Feynman doing, which is buried in the details.
[…] I have a hunch that young readers in school might be encouraged to underline the most abstract stuff, the most general stuff, the "upshot" sentences, instead of the examples those sentences are based on. Is it possible that they're taught to go after the wordsy stuff? And that they get so good at it that they develop an affinity for it and drift away from the concrete things, the things Feynman would care for? […]