The Way of the Rationalist

Once upon a time, three students of human rationality traveled along a dusty path. The first was a novice, new to the art. The second was a student, who had been practicing for a short time. The third was their teacher.

As they traveled, they happened upon a woman sitting beside a great urn attached to a grand contraption. She hailed the travelers, and when they appeared intrigued, she explained that she was bringing the contraption to town (where she hoped to make money off it), and offered them a demonstration.

She showed them that she possessed one hundred balls, identical except for their color: one was white, ninety-nine were red. She placed them all in the urn, and then showed them how the contraption worked: it consisted of a shaker (which shook the urn violently until none knew which ball was where) and a mechanical arm, which would select a ball from the urn.

"I'll give you each $10 if the white ball is drawn," she said over the roar of the shaker. "Normally, it costs $1 to play, but I'll give you a demonstration for free."

As the shaking slowed, the novice spoke: "I want it to draw the white ball, so I believe that it will draw the white ball. I have faith that the white ball will be drawn, and there's a chance I'm right, so you can't say I'm wrong!"

As the shaking stopped, the student replied, "I am a student of rationality, and I know that it is a virtue to move in tandem with the evidence. In this urn, there are more red balls than white, and so the evidence says that it is more likely that a red ball will be drawn than a white ball. Therefore, I believe that a red ball will be drawn."

As the arm began to unfold, the teacher smiled, and said only, "I assign 1% probability to the proposition 'a white ball will be drawn,' and 99% probability to 'a red ball will be drawn.'"

In order to study the art of human rationality, one must make a solemn pact with themselves. They must vow to stop trying to will reality into being a certain way; they must vow to instead listen to reality tell them how it is. They must recognize "faith" as an attempt to disconnect their beliefs from the voice of the evidence; they must vow to protect the ephemeral correspondence between the real world and their map of it.

It is easy for the student, when making this pact with themselves, to mistake it for a different one. Many rationalists think they've taken a vow to always listen to the evidence, and to let the evidence choose what they believe. They think that it is a virtue to weigh the evidence and then believe the most likely hypothesis, no matter what that may be.

But no: that is red-ball-thinking.

The path to rationality is not the path where the evidence chooses the beliefs. The path to rationality is one without beliefs.

On the path to rationality, there are only probabilities.

Our language paints beliefs as qualitative: we speak of beliefs as if they are binary things. You either know something or you don't. You either believe me or you don't. You're either right or you're wrong.

Traditional science, as it's taught in schools, propagates this fallacy. The statistician's role (they say) is to identify two hypotheses, null and alternative, and then test them, and then it is their duty (they say) to believe whichever hypothesis the data supports. A scientist must make their beliefs falsifiable (they say), and if ever enough evidence piles up against them, they must "change their mind" (from one binary belief to another). But so long as a scientist makes their beliefs testable and falsifiable, they have done their duty, and they are licensed to believe whatever else they will. Everybody is entitled to their own opinion, after all — at least, this is the teaching of traditional science.

But this is not the way of the rationalist.

The brain is an information machine, and humanity has figured out a thing or two about how to make accurate information machines. One of the things we've figured out is this: to build an accurate world-model, do away with qualitative beliefs, and use quantitative credences instead.

An ideal rationalist doesn't say "I want the next ball to be white, therefore I believe it will be." An ideal rationalist also doesn't say, "most of the balls are red, so I believe the next ball will be red." The ideal rationalist relinquishes belief, and assigns a probability.

In order to construct an accurate world-model, you must move in tandem with the evidence. You must use the evidence to figure out the likelihood of each hypothesis. But afterwards, you don't just pick the highest-probability thing and believe that. No.

The likelihoods don't tell you what to believe. The likelihoods replace belief. They're it. You say the likelihoods and then you stop, because you're done.

Most people, upon encountering the parable above, think that it is obvious. Almost everybody who hears me tell it in person just nods, but most of them fail to deeply integrate its lesson.

They hear the parable, and then they go on thinking in terms of "knowing" or "not knowing" (instead of thinking in terms of confidence). They nod at the parable, and then go on thinking in terms of "being right" or "being wrong" (instead of thinking about whether or not they were well-calibrated). They know the parable, but in the next conversation, they still insist "you can't prove that!" or "well that doesn't prove me wrong," as if propositions about reality could be "proven," as if perfect certainty were somehow possible.

No statement about the world can be proven. There is no certainty. All we have are probabilities.

Most people, when they encounter evidence that contradicts something they believe, decide that the evidence is not strong enough to switch them from one binary belief to another, and so they fail to change their mind at all. Most people fail to realize that all evidence against a hypothesis lowers its probability, even if only slightly, because most people are still thinking qualitatively.
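This is the quantitative move the passage describes: under Bayes' rule, any observation that is even slightly less likely if your hypothesis is true than if it is false must lower your credence, however little. A minimal sketch in Python (the numbers are illustrative, not drawn from the parable):

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Return P(H | E) given the prior P(H) and the likelihoods
    P(E | H) and P(E | not-H), via Bayes' rule."""
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

# A hypothesis held at 90% credence meets weak counter-evidence:
# the observation is a bit less likely if H is true (0.4) than if false (0.5).
posterior = bayes_update(0.9, 0.4, 0.5)
print(posterior)  # a little under 0.88: the credence drops, even though it stays high
```

Nothing here licenses "changing your mind" or refusing to; the update is small, but it is not zero, and a qualitative believer has no way to register it.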

In fact, most people still think that they get to choose how to draw conclusions from the evidence they've seen. And this is true — but only for those who are comfortable with avoidable inaccuracy.

This may come as a surprise to many, but humanity has uncovered many of the laws of reasoning.

Given your initial state of knowledge and the observations you have seen, there is only one maximally accurate updated state of knowledge.

Now, you can't actually achieve this perfect posterior state. Building an ideal information-gathering engine is just as impossible as building an ideal heat engine. But the ideal is known. Given what you knew and what you saw, there is only one maximally accurate new state of knowledge.

Contrary to popular belief, you aren't entitled to your own opinion, and you don't get to choose your own beliefs. Not if you want to be accurate. Given what you knew and what you saw, there is only one best posterior state of knowledge. Computing that state is nigh impossible, but the process is well understood. We can't use information perfectly, but we know which path leads towards "better."
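The claim that the posterior is uniquely determined can be made concrete. Given a prior over competing hypotheses and how likely each hypothesis makes the observation, Bayes' rule fixes the updated distribution with no remaining freedom. A sketch (the priors and likelihoods below are illustrative assumptions, not from the text):

```python
def posterior(priors, likelihoods):
    """Given priors P(H_i) and likelihoods P(E | H_i), return the
    unique posterior distribution P(H_i | E) by normalization."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)  # P(E), the normalizing constant
    return [j / total for j in joint]

# Three rival hypotheses, and how likely each makes the observation:
priors = [0.5, 0.3, 0.2]
likelihoods = [0.1, 0.6, 0.3]
print(posterior(priors, likelihoods))
```

Once the priors and likelihoods are written down, there is nothing left to choose: the output list is the one best posterior state of knowledge over those hypotheses.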

If you want to walk that path, if you want to nourish the ephemeral correspondence between your mind and the real world, if you want to learn how to draw an accurate map of this beautiful, twisted, awe-inspiring territory that we live in, then know this:

The Way is quantitative.

To walk the path, you must leave beliefs behind and let the likelihoods guide you. For they are all you'll have.

If this is a path you want to walk, then I now officially recommend starting with Rationality: From AI to Zombies, Book I: Map and Territory.

As the arm began to unfold, the teacher smiled, and said only, "I assign 1% probability to the proposition 'a white ball will be drawn,' and 99% probability to 'a red ball will be drawn.'"

The woman with the urn cocked her head and said, "Huh, you three are dressed like rationalists, and yet you seem awfully certain that I told the truth about the arm drawing balls from the urn…"

The arm whirred into motion.