
28.3: Recognizing the Relevance of a Cognitive Tool


    Cognitive Tools

    The emphasis throughout this book has been on learning about, understanding, and using a diversified kit of cognitive tools. These are concepts and principles and rules that can help us reason more carefully and accurately.

    1. Error Avoidance

    Some of these tools are used primarily for spotting and avoiding faulty reasoning. Many of these come with the labels of fallacy, bias, or error. For example, we learned about the fallacy of affirming the consequent, the strawman fallacy, and the gambler’s fallacy. We also learned about self-serving biases, confirmation bias, and hindsight bias, and we encountered the fundamental attribution error. And we studied strategies for avoiding errors in perception and memory, recognizing regression to the mean, and being more sensitive to base rates. These tools for recognizing problems are important, because the things they help us avoid are sure-fire ways to reason badly.

    2. Positive Tools

    We also learned to use tools for reasoning well. For example, we learned about valid arguments, necessary and sufficient conditions, inductive strength, rules for determining probabilities, evaluating samples, detecting correlations, tracking down causes, assessing risk, and drawing diagrams to make reasoning easier.

    3. Getting Good Information

    We can’t reason without something to reason about, so we also spent some time on tools that help us find and evaluate information from various sources. The three sorts of tools shade off into one another. The key point is that the more skilled you become in using these tools, the better your reasoning will be.

    Recognizing when these Tools are Relevant

    The greatest obstacle to reasoning well in the real world does not involve mastering the fine points of concepts, principles, and other cognitive tools. The greatest problem is recognizing situations where they apply. When the time comes that we could really benefit from a given tool, we’re often oblivious; it simply doesn’t register that the tool would help us in the situation we are in. You can learn all the checklists and definitions and rules in the world, but if you can’t recognize situations where they apply, they are useless. Finding a way to overcome this obstacle is the greatest challenge in teaching, and learning, critical reasoning.

    Recognition is the Key

    The general point that recognition is the key will be clearer if we consider several examples.

    Example 1: Sample Size

    Earlier, we encountered the two hospitals in Smudsville (15.2). About 60 babies are born every day in the larger one, and about 15 are born every day in the smaller one. On average, 50% of the births in both hospitals are girls and 50% are boys, but the number varies some from day to day. Which hospital, if either, is more likely to have more days per year when over 70% of the babies born are boys?

    Many people say that the number of 70%-boy days should be about the same in each hospital, but in fact such days are more likely in the smaller one. Since about half of all births are boys and about half are girls, the true proportions in the general population are about half and half. The births at the smaller hospital constitute a smaller sample, and a smaller sample is less likely to reflect these true proportions, so the smaller hospital is more likely to have more days per year when over 70% of the births are boys.
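    If you want to check the point, it comes down to a binomial tail probability. The sketch below is a minimal calculation, assuming each birth is an independent 50/50 event (an idealization) and using the daily birth counts from the Smudsville example; the function name is just for illustration.

```python
from math import comb

def prob_over_70_percent_boys(n_births, p_boy=0.5):
    """Chance that strictly more than 70% of n_births are boys,
    treating each birth as an independent 50/50 event (an idealization)."""
    # smallest whole number of boys that is strictly more than 70% of n_births
    min_boys = (7 * n_births) // 10 + 1
    return sum(comb(n_births, k) * p_boy ** k * (1 - p_boy) ** (n_births - k)
               for k in range(min_boys, n_births + 1))

print(prob_over_70_percent_boys(15))   # roughly 0.06, i.e., around 20 such days a year
print(prob_over_70_percent_boys(60))   # well under 0.001, i.e., less than one day a year
```

    The smaller hospital’s figure comes out roughly a hundred times larger, which is just the sample-size point in numerical form.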

    We all know that bigger samples are likely to be more representative of their parent populations, but we sometimes fail to realize that we are dealing with a problem that involves samples. We don’t recognize or identify it as a problem involving sample size. We don’t represent it to ourselves as one where our tools for thinking about samples and populations are relevant. Once we recognize that the problem involves sample size, most of us see what the answer must be.

    Example 2: Probabilities and Conjunctions

    Consider a frequency version of the conjunction fallacy: are there more six-letter words ending in ‘ing’ than six-letter words having ‘n’ as their fifth letter? All six-letter words ending in ‘ing’ have ‘n’ in fifth place (and ‘i’ in fourth and ‘g’ in sixth). So, if we suppose there are more six-letter ‘ing’ words, we commit the conjunction fallacy (16.4).
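    The rule being violated can be put in one line (a restatement of the point from 16.4 in standard probability notation): a conjunction is never more probable than either of its conjuncts,

    \[ \Pr(A \wedge B) \le \Pr(A) \quad\text{and}\quad \Pr(A \wedge B) \le \Pr(B). \]

    Ending in ‘ing’ amounts to the conjunction “‘i’ in fourth place and ‘n’ in fifth and ‘g’ in sixth,” so the ‘ing’ words form a subset of the words with ‘n’ in fifth place and cannot be more numerous.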

    But when people get this wrong, they are rarely thinking of it as a conjunction and then making an error in reasoning about the probabilities of conjunctions. They simply don’t think of it as a conjunction in the first place. They don’t recognize or code it as a conjunction, and so their tools for thinking about the probabilities of conjunctions are never applied. When people do see it as a conjunction, especially if they draw a diagram, they tend to get it right.

    Example 3: Anchoring and Adjustment

    We fall victim to an anchoring effect whenever some number or other reference point leads us to skew our reasoning in the direction of the anchor (17.5). We all know that we should ask whether an anchor is a reasonable one, but the problem is not that we typically identify something as an anchor and then fail to make sufficient adjustments. The most difficult thing is realizing that something is serving as an anchor at all, much less that we are being influenced by it. We simply don’t recognize or represent the situation as one involving an anchor.

    There are many other examples of our failure to identify situations as ones where specific cognitive tools could make our lives easier. For example, even people who avoid the gambler’s fallacy when thinking about the tosses of a coin may fail to ask whether similar reasoning applies when we wonder about the probability of Wilbur and Wilma’s sixth child being a girl after an initial string of five boys (16.3). And even if we have studied conditionals, we may have trouble with the card selection experiment (27.2) because we don’t realize that it involves conditionals, necessary conditions, and the like.
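    In the case of the sixth child, for instance, the relevant point from 16.3 fits in one line: if the sexes of successive births are (roughly) independent, then

    \[ \Pr(\text{sixth child is a girl} \mid \text{first five are boys}) = \Pr(\text{sixth child is a girl}) \approx \tfrac{1}{2}. \]

    The string of boys does not make a girl any more (or less) likely; the hard part is recognizing that this is the same kind of situation as the coin tosses.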

    Some of the questions we have encountered in the book, e.g., ones about the conjunction fallacy, are a little tricky, and they are often tricky because they make it difficult to represent the problem in the best way, e.g., as one about sample size or about the probability of a conjunction. But the fact that some of the problems seem tricky shouldn’t mislead us into thinking that they are just tricks in textbooks. Every day, all of us fail to recognize or identify situations as ones in which various cognitive tools are applicable.

    Cues That Signal when a Tool Applies

    We could phrase the problem in a sound bite:

    • Cognitive tools do not apply themselves.
    • Situations do not come labeled as ones where a certain tool is applicable.

    Knowing whether a concept or principle applies in a given situation can be very difficult, but there are a few cues that signal the relevance of a particular tool. Here are a few examples:

    Argument Indicators

    One of the questions we face over and over in reasoning tasks is this: do these premises or reasons or assumptions support that conclusion? Such situations involve arguments, but we will never be able to apply the relevant tools for dealing with arguments (e.g., validity, conditional arguments, inductive strength, various fallacies) unless we recognize that a situation involves an argument.

    Fortunately, there are some obvious and easy cues that signal the presence of an argument. Conclusion indicators (e.g., ‘therefore’, ‘so’, ‘hence’) and premise indicators (e.g., ‘because’, ‘since’) tell us that we have probably encountered an argument, suggest which parts are premises and which conclusions, and give us a fighting chance of evaluating it.

    Necessary and Sufficient Conditions

    Any time we encounter talk of requirements or prerequisites, we are probably dealing with necessary conditions. And talk of guarantees, or things that are enough, usually signals sufficient conditions (3.2).
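    Both notions can be captured by a single conditional (a compact restatement of 3.2, using the arrow for ‘if … then’): to say that N is necessary for S, or that S is sufficient for N, is just to say

    \[ S \rightarrow N. \]

    For instance, if a passing grade on the final is required for (necessary for) passing the course, then passing the course guarantees (is sufficient for) a passing grade on the final.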

    Probability Indicators

    There are some obvious cues that signal the relevance of our concepts and rules involving probabilities. When words like ‘probable’ or ‘likely’ appear, or when we are dealing with cases everyone knows involve chance (the lottery, flipping a coin), it is usually clear that probabilities could be relevant. Less obvious cues include words like ‘proportion’, ‘percentage’, and ‘frequency’, since these can often be translated into probabilities. We are also dealing with probabilistic concepts and principles when we ask which of two (or more) things is more likely (is Wilbur more likely to have gone to the show or the bar?). And when we try to decide what to do or predict what will happen, we are often dealing with probabilities. Should I major in business or engineering? Well, if I major in business I’ll probably have an easier time finding a job, but on the other hand I probably won’t like the work as much.

    Remember that we don’t need precise numbers to use many of our probability tools. You will reason better even if all you remember is that (and why) disjunctions are typically more probable than their disjuncts, and conjunctions are typically less probable (often much less probable) than their conjuncts.
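    Written as inequalities (a restatement in standard notation), those two facts are

    \[ \Pr(A) \le \Pr(A \vee B) \qquad \text{and} \qquad \Pr(A \wedge B) \le \Pr(A). \]

    Keeping just these in mind, along with the fact that the gaps are often large, is enough to catch many everyday mistakes about disjunctions and conjunctions.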

    Why?

    When someone (including you) makes a claim that isn’t obviously true, that can serve as a cue to ask, ‘why?’ What reasons are there to think that it’s true? Are there good reasons to suspect that it’s false? Getting in the habit of asking ‘why?’ when you encounter a claim is one of the simplest steps you can take to improve your reasoning. Don’t ask in an impolite way, where you quickly become a pain in the neck, or in the spirit of the child who will keep asking why no matter what answers we give. Ask in a spirit where you would accept a good answer.

    Realistic Tools

    In the real world, we rarely have the time, energy, or inclination to devote a huge amount of thought to the problems we encounter. We have better things to do on a Saturday night than sit home and calculate probabilities, but even if we enjoyed such things, the limitations on human attention, working memory, and computational abilities mean that there are limits to what we can do. So realistic cognitive tools, ones we are likely to use when we need them, must be ones we can use without a great deal of time and effort.

    We have encountered ways to avoid specific fallacies and cognitive biases. In the next section, we consider a strategy that is simple enough to be realistic, yet general enough to help us in a wide range of situations.


    This page titled 28.3: Recognizing the Relevance of a Cognitive Tool is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Jason Southworth & Chris Swoyer via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.