
11.2: Heuristics


    Zito appears to have followed a rule like the following: if something is bothering you, throw it off a hill into the river. This actually seems like a pretty good rule, right? If a badger bites onto your leg, throw it in the river. If a pinecone gets caught in your fur, throw it in the river. If a human sets up camp right next to the tree you like to use to scratch your back, throw them in the river. Yeah?

Of course, in this particular case, this general rule doesn’t seem to have worked.

    A general rule like this—one that doesn’t always work but which gets us where we need to go most of the time—is called a heuristic. That’s a fancy word for a rule of thumb, a rough and ready strategy, or a shortcut. There’s no promise that it’s going to work all of the time. It just needs to work well enough to get you through the world in one piece. Rather than take all of the time and energy it takes to make the best decision, we can take a shortcut and use a heuristic to make a pretty good decision.

Daniel Kahneman—the psychologist whose work helped found behavioral economics—called heuristics “machines for jumping to conclusions.” They allow us to quickly and easily make judgments about the world around us. Our minds, it turns out, work so darn efficiently because they are built around a series of heuristics. We use general rules to get by, and that lets us make fast, efficient decisions when speed is what will keep us alive. It’s a good way to build a mind, because most of what a mind does is try to stay alive.

It’s not so good a way to build a mind if what you’re interested in is good reasoning, fairmindedness, or intellectual virtue. Heuristics all too often get in the way of thinking well.

The notion that has come to dominate behavioral economics is called “bounded rationality.” Instead of being perfectly rational beings who always make optimal choices, we make the best choices we can given the resources we have to work with. We don’t have the time, the energy, the knowledge, the motivation, or the processing power to make perfectly optimal choices all the time. We instead make choices that are good enough. We make choices that suffice to satisfy our needs—this is called “satisficing” (I know, silly word, right? It’s a combination of “suffice” and “satisfy”).

Let’s look at a few heuristics so we can get an idea of how they work:

    Representativeness

    The representativeness heuristic can be quite useful, but can also be the source of a lot of our most problematic thinking. The basic idea is that when faced with a new situation, we find the nearest prototype(s) in our mind and use what we know about that prototype to help us understand what is going on right in front of us.

If I see someone walk into a bank wearing a ski mask, then I look through my memories to see what most closely resembles the current situation before I settle on the prototypical bank robbery. I might even be able to make really good predictions based on this prototype: this person will lock the door behind them, knock out the security guard, shoot into the air, and then yell “everybody get on the floor!” This heuristic has been quite useful. It might even save lives.

    Con artists exploit this heuristic all the time. They know that if they act like a particular prototype you have in your mind, you will associate certain things with them. If an older gentleman acts and dresses like your grandfather, then you might implicitly trust him (depending on your relationship with your grandfather, of course). You might even help him out of the financial bind he’s in by loaning him some money…You get the idea?

    This heuristic may also be behind a lot of our bigotry. We have racist, sexist, and ableist (among others) prototypes that tell us that when we see a person of a certain type, we should expect a certain thing. One can see how this might be problematic.

Here’s the explanation of the representativeness heuristic from Jason Southworth and Chris Swoyer’s text “Critical Reasoning: A User’s Manual,” shared under a CC BY-NC 4.0 license.

    Mike is 6’2”, weighs over 200 lbs., (most of it muscle), lettered in two sports in college, and is highly aggressive. Which is more likely?

    1. Mike is a pro football player.

    2. Mike works in a bank.

Here, we are given several details about Mike; the profile includes his size, build, record as an athlete, and aggressiveness. We are then asked about the relative frequency of pro football players versus bankers among people who fit this profile.

What was your answer? There are almost certainly more bankers who fit the profile, for the simple reason that there are so many more bankers than professional football players. We will return to this matter later in this chapter; the relevant point here is that Mike seems a lot more like our picture of a typical pro football player than like our picture of a typical banker. And this can lead us to conclude that he is more likely to be a pro football player.
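To see how strongly the base rates matter, here is a back-of-the-envelope sketch with purely illustrative numbers (they are not from the text, and the point survives any reasonable estimates). There are only about 1,700 active roster spots in the NFL, while banks in the US employ on the order of two million people. Even if fully half of the football players fit Mike’s profile and only one bank employee in a thousand does, we get

\( 0.5 \times 1{,}700 = 850 \) profile-fitting football players versus \( 0.001 \times 2{,}000{,}000 = 2{,}000 \) profile-fitting bank employees.

The sheer number of bankers swamps the fact that a far higher proportion of football players fit the profile.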

    Many of us made just this sort of error with Linda. Linda, you may recall, is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and she participated in antinuclear demonstrations.

    Based on this description, you were asked whether it is more likely that Linda is (i) a bank teller or (ii) a bank teller who is active in the feminist movement. Although the former is more likely, many people commit the conjunction fallacy and conclude that the latter is more probable.
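The probability rule at work here is worth making explicit. For any two claims \(A\) and \(B\), the conjunction can never be more probable than either conjunct on its own:

\( P(A \text{ and } B) = P(A) \times P(B \mid A) \le P(A). \)

Since “Linda is a bank teller who is active in the feminist movement” can only be true if “Linda is a bank teller” is true, the more detailed claim can be at most as probable as the simpler one, never more.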

    What could lead to this mistake? Various factors probably play some role, but a major part of the story seems to be this. The description of Linda fits our profile (or stereotype) of someone active in today’s feminist movement. Linda strongly resembles (what we think of as) a typical or representative member of the movement. And because she resembles the typical or representative feminist, we think that she is very likely to be a feminist. Indeed, we may think this is so likely that we commit the conjunction fallacy.

    We use the representativeness heuristic when we conclude that the more like a representative or typical member of a category something is, the more likely it is to be a member of that category. Put in slightly different words, the likelihood that x is an A depends on the degree to which x resembles your typical A. We reason like this: x seems a lot like your typical A; therefore, x probably is an A.

Sometimes this pattern of inference works, but it can also lead to very bad reasoning. For example, Linda resembles your typical feminist (or at least a stereotype of a typical feminist), so many of us conclude that she is likely to be a feminist. Mike resembles our picture of a pro football player, so many of us conclude that he probably is one. The cases differ because with Linda we go on to make a judgment about the probability of a conjunction, but in both cases we are misusing the representativeness heuristic.

Overreliance on the representativeness heuristic may be one of the reasons why we are tempted to commit the gambler’s fallacy. You may believe that the outcomes of flips of a given coin are random; the outcomes of later flips aren’t influenced by those of earlier flips. Then you are asked whether the sequence HTHHTHTT is more likely than HHHHTTTT. The first sequence may seem much more like our conception of a typical random outcome (one without any clear pattern), and so we conclude that it is more likely. Here the representativeness heuristic leads us to judge things that strike us as representative or normal to be more likely than things that seem unusual. In fact, the two sequences are equally likely, as the calculation below shows.
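Assuming a fair coin and independent flips, every specific sequence of eight outcomes has exactly the same probability:

\( P(\text{HTHHTHTT}) = P(\text{HHHHTTTT}) = \left(\tfrac{1}{2}\right)^{8} = \tfrac{1}{256}. \)

The “patterned” sequence only seems less likely because it matches our mental picture of randomness less well, not because the coin keeps track of its earlier flips.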

    Specificity Revisited

We have seen that the more detailed and specific a description of something is, the less likely that thing is to occur. The probability of a quarter’s landing heads is 1/2; the probability of its landing heads with Washington looking north is considerably less. But as a description becomes more specific, the thing described often becomes more concrete and easier to picture, and the added detail can make something seem more like our picture of a typical member of a given group.
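To put rough numbers on the coin example: suppose (just as an illustration) that the direction Washington’s profile ends up facing is about equally likely to fall in each quarter of the compass. Then

\( P(\text{heads and facing roughly north}) \approx \tfrac{1}{2} \times \tfrac{1}{4} = \tfrac{1}{8}, \)

which is well below the plain \( P(\text{heads}) = \tfrac{1}{2} \). Every detail we add multiplies in another factor no greater than one, so a more specific description can never be more probable than a less specific one it includes.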

In Linda’s case, we add the claim that she is active in the feminist movement to the simple claim that she is a bank teller. The resulting profile resembles our conception of a typical feminist activist, and this can lead us to assume that she probably is one. That, in turn, makes it seem more likely that she is a bank teller and a feminist activist than that she is just a bank teller. But the very detail we add makes our claim, the conjunction, less probable than the simple claim that Linda is a bank teller.

In short, if someone fits our profile (which may be just a crude stereotype) of the average, typical, or representative kidnapper, scrapbooker, or computer nerd, we are likely to weigh this fact more heavily than we should in estimating the probability that they are a kidnapper, scrapbooker, or computer nerd. This is fallacious, because in many cases there will be many people who fit the relevant profile who are not members of the group.

    Anchoring and Adjustment

    Do you think more than 10% or fewer than 10% of North Koreans support a change in their leadership? Now, what do you think the actual percentage is?

Do you think more than 3 million or fewer than 3 million people live in Wyoming? Now, what do you think the actual number is?

You might have guessed that 20% of North Koreans do, but you probably didn’t guess 80%, or even 60%, or maybe even 40%.

You may have guessed that 1 million or 4 million people live in Wyoming, but you probably didn’t guess 300,000 or 30 million.


These “anchors” tend to keep us tethered to a particular range of answers, even if we might have guessed a much higher or much lower figure without the anchor. This turns out to be fairly robust in empirical studies: people tend to cluster their guesses around the anchors they are given.

This phenomenon is called “anchoring and adjustment” because we tend to anchor to the first piece of information we get about a new domain (even if it isn’t presented as a fact) and then only “adjust” up or down from there. When asked what the actual number is, we don’t tend to wipe our minds clean of the anchor and start fresh. We tend, instead, to use the anchor as a clue to what the appropriate range is.

    Consider this next time you go to pick out a new shirt or a paint color or a new significant other. Are you comparing them to the first shirt you saw, the first color that caught your eye, or your ex-partner? If so, you’re probably adjusting from your anchor (your point of reference) rather than thinking in a fresh way about the decision.

    Availability

“This is a mechanism that takes whatever information is available and makes the best possible story out of the information currently available, and tells you very little about information it doesn’t have. So what you get are people jumping to conclusions. I call this a ‘machine for jumping to conclusions.’” – Daniel Kahneman

    Understanding that we operate according to the availability heuristic is one of the most important lessons we can learn, in my opinion. This underlies so much problematic reasoning that one could teach a whole class on the availability heuristic alone.

What I can think of is all there is. What occurs to me in the moment is all I need to think of in order to make good judgments. What I can recall is much more likely than what I can’t. At least, that’s what the availability heuristic would have you believe.

    Let’s say you’re booking a flight (thanks Kendra Cherry for the example), and you remember a number of recent airline accidents, terrorist attacks, and disasters. Suddenly you are thinking that taking a train might be a better option.

    Or maybe you’re thinking about the representation of gay and lesbian couples on television, and you can think of a lot, so it seems like over half of the couples on TV these days are gay or lesbian (I once heard Glenn Beck make a similar claim on his radio show, of course not aware that he was in the grips of the availability heuristic).

    Maybe you work in a hospital or police station or somewhere similar where the people you interact with are often up to no good—seeking pain pills to feed their addiction or being arrested quite often. Maybe the fact that you can remember so many of these people begins to flavor your idea of what it means to be poor or homeless or not neurotypical. You don’t interact with the homeless folks who aren’t being arrested, so you don’t see them and consequently do not include them in your calculations of how many people who are homeless have been arrested before, or how many ER patients are seeking prescription drugs.


The availability heuristic runs on the quickness and vividness of recall. When I think of the US Congress, I think of Kamala Harris, Kirsten Gillibrand, Nancy Pelosi, Joe Biden, Mitch McConnell, Paul Ryan, and Claire McCaskill. Not as many men as women. In fact, when I do think of a man—particularly a white man—in Congress, I might think of him as “another white male congressman” and not notice him as much, because white men still make up the vast majority of members of Congress.[1] So when I start to think about whether we have a representation problem in Congress, I might not be so quick to think “yes,” because I think of so many women right off the bat when I think of members of Congress. The quickness with which the women members come to mind makes it seem like there are more women in Congress than there in fact are.

The more available some set of examples is to your mind, the more prevalent you take that phenomenon to be. You see a car wreck and start to think wrecks are quite common. You see Jaws, and the vividness of your memories of the attacks makes shark attacks seem far more likely than they actually are.

You might see something on the news all the time—child abductions, for instance—and then grossly overestimate the likelihood of your children being abducted. After all, the countless children who never get abducted, or who don’t even live in the same town as a child who has been abducted, don’t register with you—they don’t seem important, so you don’t log them in your memory banks.

    As with the other heuristics, this can sometimes make our mental lives much easier, but it can also really get in the way of good reasoning about the world.

    Availability and Online Algorithm Bubbles

One can imagine how problematic the availability heuristic can be in a world where the information you have access to is determined by algorithms designed to ensure that you only see things you want to see. Companies like Facebook, Google, Twitter, and the like all run on software that puts in front of you the things you are most likely to click—to like, to follow, to comment on, to share, to engage with in some way. This software also responds to explicit instructions you send it: you don’t want to see posts from this person, you don’t want to see paid advertisements or other sponsored content from organizations and companies like this, you don’t like this particular post, and so on.

The result is what people call a “bubble”: a curated and selective set of inputs, sort of like your own mini reality—a different version of reality than the one your wacky aunt or uncle sees when they log on. So step one is to recognize that the internet often works this way: sites show you things you want to see and tend not to show you things you don’t want to see (where “want to see” is shorthand for “will likely engage with and/or not report to the algorithms for filtering out of your feed”). You might not like everything you see, but you are being shown a curated and personalized version of online reality rather than an impersonal and universal online world.

What does this have to do with availability? Simple: availability is the heuristic that says “what I see is all there is”—a process wherein the mind generalizes based on what is available rather than on what is objectively likely. If “what I see” is selective, then the availability heuristic will generalize from selective data.

If I think a certain kind of person is a certain kind of way, and I then log on and see examples of that all over my feed or search results, and I react accordingly (thereby bolstering the algorithms that served me those examples in the first place), then the availability heuristic is likely to make me generalize from those examples rather than from a more objective assessment of how prevalent that pattern actually is.

It’s almost like the availability heuristic and online bubbles were tailor-made for each other. They weren’t, but they sure get up to a lot of trouble when they get together. The availability heuristic is a bit dangerous—user beware!

    Upshots

We could spend all day learning about different mental heuristics, but instead it will serve us to draw out some implications of what we’ve learned. If it’s true (and it seems that it is) that we employ shortcuts or heuristics to get around, make judgments, and choose between alternatives, then we’re made in a way that’s more efficient than it is rational. The availability heuristic actually gives us the right answer sometimes, but it leads us astray often enough that we might question whether it’s a good cognitive strategy at all. It is, however, much more efficient than spending the time, energy, and mental resources to remember absolutely every instance of a phenomenon. If we want to know whether sharks are dangerous, it is far more efficient to remember Jaws and then not go swimming. That’s pretty bad reasoning, though, since shark attacks are extremely uncommon.

One upshot is that we shouldn’t simply “trust our guts,” since we now know that our “guts” are subject to powerful influences that can lead our gut instincts to judge incorrectly more often than we realize. We should be skeptical of our own intuitions and convictions, because we never know exactly where they come from.

    Another upshot is that these heuristics may result in dangerous thoughts that are worth thinking carefully about. If I only ever see Arabs, Pakistanis, and other ethnic groups playing terrorists on TV, then I may come to have a particular bias towards being wary of people from these ethnic groups. If I only ever see stereotypical African-Americans on TV playing into a narrative about what it means to be African-American, then I may come to believe that this narrative and that stereotype are accurate descriptions of reality.

After all, every ____ I can think of is _____, so it seems reasonable to me to think that all _____s are _____. This is the structure that availability inferences seem to take: think up some examples, and then form a generalization or tell a story that makes sense of the really small set of examples you’ve been able to recall in the moment. The problem is that we’re often only able to recall exceptional examples—exceptions to the true generalizations about the kind of thing we’re thinking about.

For more information on heuristics: OpenPsyc

For more information on biases, pitfalls, and traps, see: More Biases, Pitfalls, and Traps, by Southworth and Swoyer.


    [1] https://www.washingtonpost.com/news/the-fix/wp/2015/01/05/the-new-congress-is-80-percent-white-80-percent-male-and-92-percent-christian/?noredirect=on&utm_term=.272d8d98fab8


    This page titled 11.2: Heuristics is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Andrew Lavin via source content that was edited to the style and standards of the LibreTexts platform.
