Rationality 101

Jul 30, 2017 · 10 min read


Introduction:

[The introduction starts with a quick look at what rationality is. We go over the difference between epistemic and instrumental rationality.]

This is an introduction to a thing called rationality.

The word “rationality”, as I’m using it here, isn’t the academic term from economics, where it means something like selfishness, or the story-trope version, like Spock from Star Trek, where it means something like being emotionless.

Actually, both of those images give off the wrong connotations.

Rather, it points at a loose collection of ideas that has sprouted in the past decade or so, centered on human minds and how they work.

It’s usually split into two subsections: epistemic rationality and instrumental rationality:

Epistemic rationality, as a field, asks questions about what is true. Arguments, for example, can easily slip into poor reasoning and logical fallacies. Ideas from epistemic rationality try to ease some of these problems by looking into questions like what exactly it means to “have evidence” for your side or what makes some arguments poor ones.

While epistemic rationality looks at truth, instrumental rationality looks at achieving our goals. Sometimes, for example, our motivation to get the job done fails. Other times, poor time management hurts our ability to finish our tasks. Instrumental rationality is about finding the best ways to resolve these obstacles to get what we want. Motivation, productivity, and habits all fall into this section.

The need for both types of rationality arises because our reasoning process isn’t perfect. Our thinking can be subject to cognitive biases, psychological quirks of our brain, which can lead our thinking astray. We’ll go over a few of them in the next section.

While the two sections of rationality might initially seem rather separate, linked only by cognitive biases, the distinction between the two is really quite fuzzy. After all, a productivity hack only works if it’s based on things that really are true — if you write “I got 10 hours of work done today!” on a sheet of paper, you can’t actually get 10 hours of work done, even if you think that reality works like that.

You’ll just end up with broken expectations. So we want things that are based in the real world so they really work. And for figuring out what “really works”, you probably need evidence and reasons, which brings us back to epistemic rationality.

The two, then, are intertwined. Ultimately, though, the final arbiter is how useful any of these ideas are, so that’s what I’ll be using as a way of evaluating everything here.

Instrumental Rationality 101:

[This section first gives a deeper explanation of instrumental rationality. It then looks at three ways our thinking can go wrong and why that makes it important to care about debiasing.]

So I’ve sort of pointed at this idea of instrumental rationality — motivation, achieving our goals, and other things, but what is it really about?

If I had to summarize it in one sentence, I’d say that instrumental rationality basically bottoms out to being able to make the decisions that get you what you want.

(“Want” is actually a tricky term here, but let’s go with the naïve definition of “something you desire” for now.)

Motivation, for example, can be thought of as a drive that pushes us towards our goals. Procrastination is when we put off some of our goals in favor of other actions. Productivity is about figuring out how to increase the amount and quality of work we output.

The common thread here for all three areas is one of improving our decision-making abilities, to be able to choose and act on better options.

For example, imagine a person who has sworn off sweets. He tries to keep to his commitment. Yet, in the moment, facing a candy bar, he decides to eat it anyway, unable to control himself. After eating it, though, he soon regrets having given into temptation.

It feels like he “could have” found a way to stick to his commitment so that later on, he would avoid regret. Part of instrumental rationality’s goal is to help with these sorts of situations where, upon closer examination, we see potential ways to improve our behavior.

We want, then, to find ways to take more actions that we reflectively endorse, i.e. actions that we’d still support even if we thought about them or spent some time introspecting on them.

Still, what’s wrong with our naïve decision-making?

Why focus on all these areas of instrumental rationality to try and boost our abilities? Well, as we’ll see both here and later, humans aren’t that great at naïvely achieving our goals. Those pesky cognitive biases I mentioned earlier can lead our thinking astray, causing us to make poorer choices when in the moment.

For a quick crash course, here are three instances of how our thinking can go wrong:

1) We’re terrible at planning:

Students were asked to estimate when they were 99% certain they’d finish an academic project. When the time came, only 45% of them finished by their 99% estimate.

Perhaps even worse, students in another study were asked to predict times for finishing their senior thesis if “everything went as poorly as it possibly could.” Less than a third of students finished by that time [Buehler, Griffin, & Peetz, 2010].

It’s far from just the students. Our overconfidence in planning has been replicated across fields, from financial decisions to software projects to major government ventures. Just because something feels certain doesn’t mean it’s really so.
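If you’re curious how calibrated your own planning is, here’s a minimal sketch of the kind of log you could keep; the tasks and dates below are made up for illustration, and the point is just to compare your “99% sure” deadlines against what actually happened:

```python
# Minimal self-calibration check (illustrative sketch; the entries are hypothetical).
from datetime import date

predictions = [
    # (task, "99% sure I'll finish by" date, actual finish date)
    ("essay draft",    date(2017, 3, 1),  date(2017, 3, 5)),
    ("tax forms",      date(2017, 4, 10), date(2017, 4, 9)),
    ("project report", date(2017, 5, 1),  date(2017, 5, 20)),
]

# Count how often the work was actually done by the "99% certain" deadline.
hits = sum(actual <= deadline for _, deadline, actual in predictions)
print(f"Hit rate at '99% confidence': {hits / len(predictions):.0%}")

# A well-calibrated planner would see something near 99%;
# the students in the study above managed 45%.
```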

2) We’re screwed over by herd mentality:

Participants were placed either alone or in a group intercom discussion (but they were in separate rooms, so they couldn’t see one another). One of the “participants” then had a seizure (they were really working with the experimenter, and the seizure was faked).

When alone, people went to help 85% of the time. But when the participants knew there were four other people in different rooms who had also heard the seizure, only 31% of those groups had people who reported the incident [Darley & Latane 1968].

Alas, there are also many real-life examples of our inability to handle responsibility in a group, often with disastrous results. Being in a group can make it harder to make good decisions.

3) We’re really inconsistent:

People were asked how much they’d pay to save one child. Then, they were asked how much they’d pay to save one child out of two, so it’d be uncertain which one. On the second question, people were willing to pay less, even though they’d be saving one person in both scenarios [Västfjäll, Slovic, & Mayorga, 2014].

In another study, people were willing to pay $80 to save 2,000 drowning birds, but a similar group of people came up with basically the same number, $78 (actually a little less!), when given the same question for 20,000 birds, ten times the initial number [Desvousges et al., 1992].

Even though we like to consider ourselves “rational creatures”, it’s clear that we’re easily foiled by things like uncertainty or even just big numbers.
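To make that last inconsistency concrete, here’s the back-of-the-envelope arithmetic on the bird numbers (my own calculation from the figures quoted above, not something reported in the original paper):

```python
# Implied per-bird valuation from the Desvousges et al. figures quoted above.
willingness_to_pay = {2_000: 80.0, 20_000: 78.0}  # birds saved -> average dollars offered

for birds, dollars in willingness_to_pay.items():
    print(f"{birds:>6} birds: ${dollars:.2f} total, ${dollars / birds:.4f} per bird")

# Output:
#   2000 birds: $80.00 total, $0.0400 per bird
#  20000 birds: $78.00 total, $0.0039 per bird
# Ten times as many birds, yet the total barely moves; the implied value per bird drops ~90%.
```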

Hopefully those three examples gave some intuitions about where our naïve decision-making can go wrong. As you might have guessed, there are many, many more cognitive biases I didn’t cover here. The main point is that simply relying on our typical mental faculties means relying on a process that can easily make mistakes.

We’ll later see that things like the overconfidence exhibited in 1) might have been useful in the past for scaring off rival tribes, or that the herd mentality in 2) might actually have been a good indicator of when to help someone out.

But things have changed.

Bluffing doesn’t scare off advancing deadlines. Now we can have friendships with people literally across the globe, past our local “tribes”.

Yet, we’re still stuck with pretty much the same squishy mammal brain as that of our distant ancestors. Mental adaptations that once might have proved helpful on the savanna have become poorly suited for our modern world.

Our brains are nothing more than lumps of wet meat cobbled together from years and years of iteration by evolution. They’re powerful, yes, but they’ve also got a whole bunch of leftover legacy code that isn’t necessarily useful in today’s world.

Thankfully, our brains can also look into themselves. Remember that the brain named itself! Instrumental rationality is about examining how our brains work and saying, “Hey, it seems a little weird that our thinking works this way.”

In recent years, research across assorted areas has uncovered strategies shown to improve our decision-making skills.

Armed with concrete strategies for shaping our thinking, along with the knowledge of the different pitfalls, it appears possible to do better than our clunky primate brain defaults. We can try to be less wrong in our thinking.

And that’s rather exciting.

Further Resources:

[Three suggested places to look further into these topics: LessWrong, Thinking, Fast and Slow, and the Center for Applied Rationality (CFAR).]

As a community, rationality sort of centers around a focus on discovering truth, improving oneself, and finding good ways to help others.

For the past few years, discussion websites such as LessWrong (heh) have been the home of online discussion about how research on biases and debiasing can improve our lives. Eliezer Yudkowsky (LessWrong’s cofounder) and others, for example, have written a huge amount of content summarizing this research.

Some of this effort can be found in a free book called Rationality: From AI to Zombies.

Rationality: From AI to Zombies goes far beyond just heuristics and biases. Part researcher, part philosopher, Yudkowsky explores epistemology, cognitive science, and probability. There are essays on topics from resolving disagreements to artificial intelligence. Thematically, it feels a lot like Douglas Hofstadter’s legendary Gödel, Escher, Bach, although its short essay compilation format makes it a very different type of read.

Reading just a few essays in the collection can help point at some more ideas when it comes to debiasing. (I’d recommend this one or this one for strong, usable ideas.) Yudkowsky’s writings can be complex at times, though, and the insights are not always immediate.

For a much more direct introduction to heuristics and biases, Nobel Prize winner Daniel Kahneman’s bestselling book Thinking, Fast and Slow provides an engaging account of the field, written in a familiar, easy-to-read way.

Kahneman is considered a co-founder of the entire field of heuristics and biases; he and Amos Tversky were the psychologists who kicked it off in the 1970s. Thinking, Fast and Slow is probably the best introduction to the decades of research on this topic.

It even includes the original survey questions from the studies, so you can see your own responses. Thinking, Fast and Slow easily makes my own top-three list of books I’ve ever read. The book is a major crash course in the study of human error, and there are many valuable lessons about drawing conclusions and making decisions along the way.

But what about those debiasing strategies I mentioned earlier? One of the best places to learn more is the nonprofit Center for Applied Rationality (CFAR). CFAR combines concepts from economics and cognitive psychology to create research-backed techniques to combat biases.

They then host workshops where they teach these skills to improve thinking and problem-solving. Their website has a wealth of materials on debiasing, with book recommendations, checklists, videos, and more. With a mission of “actually trying to figure things out”, they have a lot of great applied ideas.

If any of this sounds interesting, I’d highly recommend poking around on the above resources. (CFAR also has their own reading list here.)

Of course, I’ll be the first to admit that debiasing doesn’t solve everything.

Still, it’s great to know we can strive to do better.

Once you start reading up on these errors, it becomes easier to catch yourself making them, as well as to see your past mistakes in hindsight. By recognizing the cues, revamping your planning, and changing your thoughts, debiasing can help you do more of what you really want, instead of falling back on your buggy defaults.

References:

Buehler, Roger, Dale Griffin, and Johanna Peetz. “The Planning Fallacy: Cognitive, Motivational, and Social Origins.” Advances in Experimental Social Psychology 43 (2010): 1-62. https://www.researchgate.net/publication/251449615_The_Planning_Fallacy

Darley, John M., and Bibb Latane. “Bystander Intervention in Emergencies: Diffusion of Responsibility.” Journal of Personality and Social Psychology 8.4, Pt. 1 (1968): 377. http://www.wadsworth.com/psychology_d/templates/student_resources/0155060678_rathus/ps/ps19.html

Desvousges, William H., et al. “Measuring Nonuse Damages Using Contingent Valuation: An Experimental Evaluation of Accuracy.” (1992). http://www.rti.org/sites/default/files/resources/bk-0001-1009_web.pdf

Västfjäll, Daniel, Paul Slovic, and Marcus Mayorga. “Whoever Saves One Life Saves the World: Confronting the Challenge of Pseudoinefficacy.” Manuscript submitted for publication (2014). http://globaljustice.uoregon.edu/files/2014/07/Whoever-Saves-One-Life-Saves-the-World-1wda5u6.pdf
