Here’s a post from mathbabe:

Most people just use stuff they “know to be true,” without having themselves gone through the proof. After all, things like Deligne’s work on the Weil Conjectures or Gabber’s recent work on finiteness of étale cohomology for quasi-excellent schemes are really fucking hard, and it’s much more efficient to take their results and use them than it is to go through all the details personally.

After all, I use a microwave every day without knowing how it works, right?

I’m not sure I know where I got the feeling that this was an ethical issue. Probably it happened without intentional thought, when I was learning what a proof is in math camp, and I’d perhaps state a result and someone would say,

how do you know that?

and I’d feel like an asshole unless I could prove it on the spot.

Anyway, enough about me and my confused definition of mathematical ethics – what I now realize is that, as mathematics is developed more and more, it will become increasingly difficult for a graduate student to learn enough and then prove an original result without taking more and more on faith. The amount of mathematical development in the past 50 years is just frighteningly enormous, especially in certain fields, and it’s just crazy to imagine someone learning all this stuff in 2 or 3 years before working on a thesis problem.

What I’m saying, in other words, is that my ethical standards are almost provably unworkable in modern mathematical research. Which is not to say that, over time, a person in a given field shouldn’t eventually work out all the details to all the things they’re relying on, but it can’t be linear like I forced myself to work.

And there’s a risk, too: namely, that as people start getting used to assuming hard things work, fewer mistakes will be discovered. It’s a slippery slope.

I don’t have much comment to make on the substance of the post, which I really liked, but it made me think of a few things.

With basically no formal math training since high school, I spent a fair bit of time failing actuarial exams (the harder math ones, anyway) until I finally got out the damn textbooks and learned proofs. There’s a difference between ‘knowing’ something and knowing something, a distinction I only truly understood a few years ago. I’m an advocate of deep understanding.

Thinking in terms of business, a consequence of a relatively efficient economy is that returns can mostly be attributed to luck. This can be interpreted in many ways. Consider YCombinator’s strategy of incubating dozens and dozens of startups by concentrating on the drive and focus of the founders and building a support system to nudge up the probability of success. That’s a strategy designed to maximize exposure to luck and be ruthless about recognizing and pursuing it when it strikes.

Then again, some people just screw something up, make a big financial bet on something they don’t understand, and win.

The point is that, entrepreneurially speaking, deep understanding can paralyze. There aren’t many business ideas that make a lot of sense to domain experts.

Most of us aren’t pursuing that kind of grand goal, though, and there are lots of homes for smart nay-sayers. Most organizations deliberately employ them to keep a lid on new ideas, because most new ideas are stupid.

I’m thinking humanity will find a way to deal with the rapid expansion of mathematics and other knowledge by increasing the rate at which humans learn – maybe through genetic tinkering. Perhaps such a solution would cause more problems than it aims to solve.