
28.1: Good Thinking


    The world is changing rapidly, and much of what you learn now will be outdated in a few years. Many of your grandparents, and perhaps even your parents, had just one or two jobs during their adult life, but studies show that many people now in their twenties will have a succession of jobs, often in quite different fields, and this will require the acquisition of new skills throughout their lives.

    By contrast, the tools and skills learned for critical reasoning will not go out of date. They are skills you can use to learn how to learn. But if they are going to help you, you must be able to apply the appropriate concepts and principles to new situations. This is easier said than done, and in this chapter, we will focus on ways to make it a little easier.

    What Counts as Good: Who Decides?

    Throughout this book, we have recommended some ways of reasoning as good and criticized others as flawed, often labeling the latter as fallacies, biases, or errors. But who decides what counts as good or bad reasoning? Indeed, nowadays one sometimes hears that the strategies and tools recommended in books like this foster rigid, linear, old-fashioned "inside the box" thinking that limits, rather than improves, reasoning.

    Talk like this sounds better in the abstract, though, than it does when we get down to concrete cases (as we will in a moment). The point of reasoning is typically to help us decide what to do, to assist us in our actions, and the worst thing about bad reasoning is that it often leads to actions that no one, not even the person who acts, wanted or intended.

    Everyone makes mistakes in reasoning that are costly, to themselves and to others. You need only glance at the morning headlines or the evening news to hear about people's miscalculations and follies, about reasoning and actions that are utterly self-defeating, even by the agents' own standards. Indeed, you have probably seen instances of this firsthand. Here are three illustrations:

    Bay of Pigs

    On April 17, 1961, John F. Kennedy launched an attempt to overthrow Fidel Castro in Cuba. The operation was a complete failure, and it cast doubt on Kennedy's ability to govern. The assumptions and planning of Kennedy and his advisors were full of mistakes that led to the so-called Bay of Pigs fiasco (24.6).

    Enron

    In the fall of 2001, the giant company Enron collapsed. Many people, including employees, had invested a great deal of their money in it. The danger signs probably seem clearer in retrospect (due to hindsight bias), but even when things began to look shaky, many people continued to invest. Similar problems occurred, on a smaller scale, with the collapse of many dot-com companies.

    Covid-19

    Florida governor Ron DeSantis was very slow to close public beaches and to issue a shelter-in-place order during the Covid-19 pandemic. Concern for profits in resort communities during Spring Break resulted in thousands of extra people getting sick, and hundreds dying.

    Many of the pitfalls and problems discussed in earlier chapters affect us in ways that matter to us a lot. For example, issues about framing, anchoring, and contrast effects arise in many negotiations: determining how to divide up domestic labor, buying (or selling) a car or a house, trying to get the best salary you can at the new job, or (more depressingly) deciding who gets what, including access to the children, after a divorce.

    Here are just a few of the ways good and bad reasoning have an impact on our lives.

    Framing Effects

    Our reasoning and choices about options often depend on the way the options are framed or described. If an issue is framed in terms of a gain, we are likely to think about it one way; if it is framed as a loss, we are more likely to think about it differently (25.7). If we aren’t on guard, our choices may not be based on the real features of the situation, but on the details, often quite trivial ones, of how the situation is described. Furthermore, since other people often supply the frames, this makes us vulnerable to manipulation.

    We also noted a study (18.1) where physicians recommended different treatments (surgery or radiation therapy), depending on how the same situations (involving cancer) were described. But no one would want their cancer treatment to be determined by something as irrelevant as this.

    Conditional Probabilities

    We tend to confuse conditional probabilities with their inverses; that is, we tend to confuse Pr(A|B) with Pr(B|A) (14.1). This can lead to many problems. For example, such probabilities are easy to confuse when we interpret medical tests (e.g., a test for HIV), drug tests, and lie detector tests. Indeed, diagnoses and treatment are often based on beliefs about conditional probabilities (e.g., the probability of recovery given treatment A vs. the probability of recovery given treatment B), beliefs that even some physicians get wrong. You don't want mistakes like this to occur when a doctor is treating your child for a serious disease, or when a prospective employer learns that you failed a drug test.
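    The gap between Pr(positive test | disease) and Pr(disease | positive test) can be made concrete with Bayes' rule. The sketch below uses hypothetical numbers chosen only for illustration (a 99% sensitive test, a 5% false-positive rate, a 1-in-1000 base rate); the point is the structure of the calculation, not the specific figures.

```python
# Bayes' rule: Pr(disease | positive) is NOT Pr(positive | disease).
# All numbers here are hypothetical, for illustration only.

def posterior(prior, sensitivity, false_positive_rate):
    """Pr(disease | positive test), by Bayes' rule."""
    # Total probability of testing positive: true positives + false positives.
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A "99% accurate" test for a disease with a 1-in-1000 base rate:
p = posterior(prior=0.001, sensitivity=0.99, false_positive_rate=0.05)
print(round(p, 3))  # 0.019 -- under 2%, despite the impressive-sounding test
```

    Even with a highly sensitive test, a positive result leaves the chance of disease below 2%, because the low base rate means false positives vastly outnumber true positives. This is exactly the inversion error the paragraph above describes.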

    Fallacy of Irrelevant Reason

    We commit this fallacy if we base our reasoning or conclusions on irrelevant premises (11.6). Premises that are irrelevant are ones that simply do not bear on the truth or falsity of the conclusion one way or another, so they can never provide good reasons for believing a conclusion. When we commit this fallacy, we end up believing things for no (good) reason at all.

    Cumulative Risk

    Many of the things we do are relatively safe each time we do them. For example, the probability that you will be in an automobile accident on any given outing, or that a single condom will fail, is low. But when we do things over and over, we are dealing with probabilities of disjunctions. These instances mount up, so that over time the risk can become reasonably large (16.5). This is a simple fact about probabilities, and ignoring it leads to bad risk management.

    Availability

    If we base our inductive inferences on samples that come readily to mind—that are available (17.2)—we will often be basing our inductive reasoning on biased samples, and this is likely to lead us to conclusions that are simply false. If we don't appreciate how large the difference is between the probability of having a heart attack and the probability of dying at the hands of a terrorist (even after September 11), it will be difficult to make rational plans about smoking, diet, and travel. Thousands and thousands of Americans die from heart disease every year, whereas even now very few Americans die at the hands of terrorists.

    Causal Fallacies

    If our actions are to be effective, we need to act in ways that bring about— that cause—the effects we want. If we reason badly about causation (20.6), so that we don’t really know what causes what, many of our predictions will be false, and many of our plans and decisions based on those (faulty) predictions will lead to outcomes that we don’t desire.

    Problems also arise when we confuse necessary and sufficient conditions, fall victim to confirmation bias or illusory correlations, ignore regression to the mean, underutilize base rates, and so on.

    In many of these cases, our own bad reasoning opens the door for others to manipulate us, e.g., by framing things in a way favorable to them (rather than us), by setting anchors that skew our reasoning, by manipulating our actions and attitudes by dissonance reduction mechanisms (19) or by the various tools of professional persuaders (22.6).

    Wanting to avoid being at the mercy of flawed reasoning and manipulation is not just some prejudice in favor of “linear” reasoning. It is an essential part of wanting to be in control of our own life and to base our actions on a realistic assessment of the facts, rather than on irrelevant information or false assumptions.

    How Good, or Bad, are We?

    How good—or bad—is our reasoning? People who study the matter are not in complete agreement, though in many ways we are stunningly good. No one has a clue how to build a computer with a perceptual system or memory remotely as good (at some things) as ours. And even in cases where we have lapses, our judgments and decisions are not hopelessly flawed. We couldn’t make it through the day if they were.

    The spotty picture emerging from several decades of research suggests that we can reason pretty well under some conditions. But this ability is fragile and unstable, and it can be affected, diverted, and even subverted by a variety of influences, as the follies and fiascoes noted above make clear.

    It doesn’t really matter exactly how good—or bad—we are, though. The important points are that we could all do better, and that we would be better off if we did. You don’t need to know whether the rate of a serious disease is 12% or 20% before you try to find ways to cure it. Similarly, we don’t need to know precisely how widespread bad reasoning is to try to improve it.

    No Quick Fixes

    In many cases, the first step to better reasoning is to learn about tempting, but faulty, ways of reasoning, such as ignoring base rates or regression effects. But, unfortunately, there is a great deal of evidence that merely learning about such pitfalls, and being warned against them, is not an effective way to avoid them, especially over the long run.

    It would help us understand how to reason better if we knew more about why we reason badly. We can often help sick people feel better by treating their symptoms, e.g., giving them aspirin to ease the aches and pains. But it is usually more effective if we can identify the underlying cause of an illness (e.g., infection by a certain bacterium) and intervene to change things at the level of root causes (e.g., by killing the bacteria with an antibiotic).

    Similarly, it would probably be easier to improve reasoning by designing interventions that work on the underlying mechanisms that produce suboptimal reasoning, rather than by trying to work on the symptoms directly (e.g., by nagging at people to use base rates).

    We do have some understanding of the various causes of suboptimal reasoning, but we do not yet know nearly as much as we would like. In cases where we do know a little, we can try to change the basic causes of bad reasoning; in cases where we don’t, we can still try to treat the symptoms.

    Unfortunately, there are no magic bullets for either sort of case. It is more accurate to think of the techniques we have studied in much the way we think about exercise and a healthy diet. Neither guarantees that things will always go well— that we will always be healthy or that we will always reason carefully—but with them, things will go well more often than they would with a bad diet, or with faulty reasoning. Moreover, again, as with staying healthy, learning to reason better isn’t an all-or-nothing proposition. It is a life-long process, and all of us have room for improvement.

    In a moment, we will examine some general ways we can improve our reasoning, but first let’s ask how each one of us—individually—might discover how good, or how flawed, our own reasoning is.


    This page titled 28.1: Good Thinking is shared under a CC BY-NC 4.0 license and was authored, remixed, and/or curated by Jason Southworth & Chris Swoyer via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.