
4.3: Some Intellectual Vices


A helpful way to practice these intellectual virtues is to see the ways that we fail to practice them, so that we can learn from those failures and avoid them. Learning from how we think badly is a great way to learn how to think well. “Vice” is the traditional name for a lack of virtue. Typically vices are themselves character traits or dispositions, defects in who we are that cause us to act poorly. We’re going to focus on vices that have traditionally been called “fallacies,” a term that is less helpful because it suggests that such actions are always wrong. This is not true. With all virtues and vices, context matters for whether an action expresses virtue or vice in a particular situation. So in general, we prefer the term “vice” to “fallacy.” However, since some of the content that follows is excerpted from more traditional textbooks, you will see the language of fallacies used; just know that we’re always talking about intellectual vices, ways we fall short of being intellectually virtuous.

Most of the following vices are not themselves character traits, but expressions of character traits. Using the term “vice” this expansively is still consistent with the idea that virtues and vices are fundamentally character traits. Philosopher Quassim Cassam notes that a vice is anything in us that is “likely to impede effective and responsible inquiry” (165), and this includes both bad character traits like dishonesty or arrogance and their expression in behaviors like wishful thinking or stubbornly ignoring evidence contrary to one’s beliefs. The vices below are all ways of not thinking well because of failures in our intellectual character.

Since we noted that these vices are also labeled fallacies, it’s important to realize that these are not the same kinds of fallacies we will encounter later in Chapter 7 when we discuss “logical fallacies.” Logical fallacies are patterns of inference in which the conclusion never follows from the premises. Logical fallacies are always bad. The intellectual vices are not logical fallacies; they have to do with how we behave badly in our thinking or in our conversations with other people. This is another reason why labeling both types of mistakes “fallacies” is unhelpful: it suggests that the two are the same kind of thing. Some textbooks call the logical fallacies “formal fallacies” and the vices in this chapter “informal fallacies,” to show that the first kind are mistakes of logical form and the second kind are not. We think it is most helpful to simply call them something else: intellectual vices.

It’s worth noting that the presence of intellectual vices means we haven’t gone about our reasoning with virtue. It does not mean that our conclusions, the things we believe, are in fact wrong. Vice describes our justification for what we believe, not the truth of what we believe. Even a broken clock is right twice a day, and thinking in an intellectually vicious manner doesn’t mean our conclusions are false. What it does mean is that we have not thought well enough to be justified in believing our conclusions. Make sure you don’t use the vices to ignore the possible truth of a claim. The vices are reasons to reject a particular argument, but you should always ask yourself, “What would this argument look like if it were argued virtuously?”

    Okay, how will we progress from here? There are two sorts of vices we’ll discuss in this chapter: Vices of Relevance and Vices of Presumption. We’ll go through each vice and offer some examples. Most of the rest of this chapter is pulled from Van Cleave and Knachel. There are also fallacies in Chapter 2 and many more in Chapter 8.

    Vices of Relevance

Vices of relevance are ways of making arguments or critiquing arguments that have no relevance to the arguments themselves. When we are not intellectually honest or humble, or when we lack curiosity and charity, then we tend to be more focused on winning arguments or proving our friends wrong than on seeking which conclusions actually have the strongest justification. That tempts us to make arguments that are psychologically or socially successful, but not actually good arguments (because they depend on irrelevant details). Keep in mind that not every topic shift in an argument is a vice of relevance, because sometimes the new topic is relevant to the argument. We’ll see an excellent example of this in our first vice of relevance.

    Ad Hominem Attack/Argument Against the Person and Genetic Fallacy

The vice of an Ad Hominem Attack occurs when someone unfairly attacks the character and motives of the arguer instead of their argument. Recall that as charitable thinkers, we are trying to separate the arguer from their argument and address the latter on its own merits. If even the most loathsome person makes a good argument, that argument remains valid or strong regardless of the failings of the person making it. But not every Ad Hominem is a vice; sometimes people put forward bad arguments because of a lack of virtue. For example, if a person makes an argument, not for the sake of truth but to prove that they are smarter than others, then it is an appropriate response to avoid the argument and instead criticize their lack of honesty or curiosity. Notice that this does not render their argument bad; it simply avoids addressing the merits of the argument until the arguer is prepared to debate those merits for the right reasons.

Because intellectual virtue is an important part of thinking well, an ad hominem critique is appropriate when someone (including ourselves!) is not thinking virtuously. As Quassim Cassam writes, “The evaluation of the justificational status of a particular belief is closely related to the evaluation of the believer” (2016: 175). If we think someone has made a good argument, we’re saying they are thinking well, and this means they are thinking virtuously. Cassam elaborates, “A justified belief is characteristically one which arises through the exercise of intellectual virtue. In evaluating a belief as justified we are in effect commending the believer” (2016: 176).

Okay, so when is an ad hominem attack a vice? If we think about Cassam’s proposal above, then ad hominem is a vice whenever we attack someone’s character (instead of their argument) for reasons other than their lack of intellectual virtue. If I say of someone, “I don’t think your argument is honest about the reasons against your position,” I’m fairly criticizing a lack of virtue. But if I say of someone, “I don’t think your argument is good because you’re a loan shark,” I have exemplified a vice; I have made an argument without virtue.

    There are three main types of ad hominem attack:

    1. Abusive: you simply attack the character or rationality of your opponent (or a group to which the opponent belongs like "liberals" or "pro-lifers")

2. Circumstantial: you point to circumstances which make your opponent untrustworthy or suspect.

3. Tu Quoque: Latin meaning "you too!" You point to similar faults in your opponent when your actions or character are called into question. More generally: you point to a fault elsewhere to draw attention away from the fault being discussed.

    From Matthew J. Van Cleave's Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

    “Ad hominem” is a Latin phrase that can be translated into English as the phrase, “against the man.” In an ad hominem fallacy, instead of responding to (or attacking) the argument a person has made, one attacks the person him or herself. In short, one attacks the person making the argument rather than the argument itself. Here is an anecdote that reveals an ad hominem fallacy (and that has actually occurred in my ethics class before).

A philosopher named Peter Singer had made an argument that it is morally wrong to spend money on luxuries for oneself rather than give away all of the money you don’t strictly need to charity. The argument is actually an argument from analogy (whose details I discussed in section 3.3), but its essence is this: every day, children in this world die preventable deaths, and there are charities that could save the lives of these children if they were funded by individuals from wealthy countries like our own. Since we all regularly buy things we don’t need (e.g., Starbucks lattes, beer, movie tickets, or extra clothes and shoes), if we choose to continue purchasing those things rather than donating the money to a charity that will save the lives of children in need, then we are essentially contributing to the deaths of those children. In response to Singer’s argument, one student in the class asked: “Does Peter Singer give his money to charity? Does he do what he says we are all morally required to do?”

The implication of this student’s question (which I confirmed by following up with her) was that if Peter Singer himself doesn’t donate all his extra money to charities, then his argument isn’t any good and can be dismissed. But that would be to commit an ad hominem fallacy. Instead of responding to the argument that Singer had made, this student attacked Singer himself. That is, she wanted to know how Singer lived and whether he was a hypocrite or not. Was he the kind of person who would tell us all that we had to live a certain way but fail to live that way himself? But all of this is irrelevant to assessing Singer’s argument. Suppose that Singer didn’t donate his excess money to charity and instead spent it on luxurious things for himself. Still, the argument that Singer has given can be assessed on its own merits. Even if it were true that Peter Singer was a total hypocrite, his argument may nevertheless be rationally compelling. And it is the quality of the argument that we are interested in, not Peter Singer’s personal life and whether or not he is hypocritical. Whether Singer is or isn’t a hypocrite is irrelevant to whether the argument he has put forward is strong or weak, valid or invalid. The argument stands on its own, and it is that argument rather than Peter Singer himself that we need to assess.

    Nonetheless, there is something psychologically compelling about the question: Does Peter Singer practice what he preaches? I think what makes this question seem compelling is that humans are very interested in finding “cheaters” or hypocrites—those who say one thing and then do another. Evolutionarily, our concern with cheaters makes sense because cheaters can’t be trusted and it is essential for us (as a group) to be able to pick out those who can’t be trusted. That said, whether or not a person giving an argument is a hypocrite is irrelevant to whether that person’s argument is good or bad. So there may be psychological reasons why humans are prone to find certain kinds of ad hominem fallacies psychologically compelling, even though ad hominem fallacies are not rationally compelling.

Not every instance in which someone attacks a person’s character is an ad hominem fallacy. Suppose a witness is on the stand testifying against a defendant in a court of law. When the witness is cross-examined by the defense lawyer, the defense lawyer tries to undermine the witness’s credibility, perhaps by digging up things about the witness’s past. For example, the defense lawyer may find out that the witness cheated on her taxes five years ago or that the witness failed to pay her parking tickets. The reason this isn’t an ad hominem fallacy is that in this case the lawyer is trying to establish whether what the witness is saying is true or false, and in order to determine that we have to know whether the witness is trustworthy. These facts about the witness’s past may be relevant to determining whether we can trust the witness’s word. In this case, the witness is making claims that are either true or false rather than giving an argument. In contrast, when we are assessing someone’s argument, the argument stands on its own in a way the witness’s testimony doesn’t. In assessing an argument, we want to know whether the argument is strong or weak and we can evaluate the argument using the logical techniques surveyed in this text. In contrast, when a witness is giving testimony, they aren’t trying to argue anything. Rather, they are simply making a claim about what did or didn’t happen. So although it may seem that a lawyer is committing an ad hominem fallacy in bringing up things about the witness’s past, these things are actually relevant to establishing the witness’s credibility. In contrast, when considering an argument that has been given, we don’t have to establish the arguer’s credibility because we can assess the argument they have given on its own merits. The arguer’s personal life is irrelevant.

Figure \(\PageIndex{1}\): “Look,” says the bird, “if we start walking around on the ground all the time, the cat will get us. I know I have a personal stake in this because I’d prefer not to be eaten, but my argument would stand even if I myself were a cat!” (Image Credit: Otto Speckter in Picture Fables)

    Tu Quoque

    Tu Quoque is a version of the Ad Hominem fallacy. Here’s Van Cleave again.

“Tu quoque” is a Latin phrase that can be translated into English as “you too” or “you, also.” The tu quoque fallacy is a way of avoiding a criticism by bringing up a criticism of your opponent rather than answering the original criticism. For example, suppose that two political candidates, A and B, are discussing their policies and A brings up a criticism of B’s policy. In response, B brings up her own criticism of A’s policy rather than responding to A’s criticism of her policy. B has here committed the tu quoque fallacy. The fallacy is best understood as a way of avoiding having to answer a tough criticism that one may not have a good answer to. This kind of thing happens all the time in political discourse.

Tu quoque, as I have presented it, is fallacious when the criticism one raises is simply in order to avoid having to answer a difficult objection to one’s argument or view. However, there are circumstances in which a tu quoque kind of response is not fallacious. If the criticism that A brings toward B is a criticism that equally applies not only to A’s position but to any position, then B is right to point this fact out. For example, suppose that A criticizes B for taking money from special interest groups. In this case, B would be totally right (and there would be no tu quoque fallacy committed) to respond that not only does A take money from special interest groups, but every political candidate running for office does. That is just a fact of life in American politics today. So A really has no criticism at all of B, since everyone does what B is doing and it is in many ways unavoidable. Thus, B could (and should) respond with a “you too” rebuttal, and in this case that rebuttal is not a tu quoque fallacy.

    Attacking causes for belief rather than reasons for belief (Genetic Fallacy)

    The vice of attacking the causes for belief, sometimes called the Genetic Fallacy, requires learning the difference between causes and reasons. Perhaps I trust my physician because my best friend goes to the same physician. The ‘because’ here is an explanation for why I trust them. But if you were to ask me why I trust my physician, I might say, “Because she is the most highly-rated general practitioner in my area.” Now I have given you a reason for trusting her. Both can be true descriptions of my trust: the cause of my trust is that my best friend trusts her, and the reason I think my trust is justified is that she is so highly rated.

    When criticizing an argument, we want to criticize the reasons for belief, not the causes. The genetic fallacy occurs when, for example, instead of looking at your beliefs as they stand on their own, I look at the role those beliefs play in your psychology or the psychological origins of those beliefs. I might say that you only believe in the free market because your father believes in the free market. That’s not an attack against the belief itself. At best it amounts to the claim that you don’t have any justification for believing it, only an explanation for how you came to believe it.

That would be like critiquing a particular golf club because a brand with a bad reputation manufactured it. It’s still a perfectly good golf club no matter who made it. We should critique the golf club on the basis of its usefulness as a golf club, not on the basis of where it was made. Note that it might be reasonable not to trust a bad brand when making a purchase, but if the reviews come in and it’s a fine golf club, then its origin is irrelevant.

    From Matthew J. Van Cleave's Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

The genetic fallacy occurs when one argues (or, more commonly, implies) that the origin of something (e.g., a theory, idea, policy, etc.) is a reason for rejecting (or accepting) it. For example, suppose that Jack is arguing that we should allow physician-assisted suicide and Jill responds that that idea was first put into practice in Nazi Germany. Jill has just committed a genetic fallacy because she is implying that because the idea is associated with Nazi Germany, there must be something wrong with the idea itself. What she should have done instead is explain what, exactly, is wrong with the idea rather than simply assuming that there must be something wrong with it since it has a negative origin. The origin of an idea has nothing inherently to do with its truth or plausibility. Suppose that Hitler constructed a mathematical proof in his early adulthood (he didn’t, but just suppose). The validity of that mathematical proof stands on its own; the fact that Hitler was a horrible person has nothing to do with whether the proof is good. Likewise with any other idea: ideas must be assessed on their own merits, and the origin of an idea is neither a merit nor a demerit of the idea.

    Although genetic fallacies are most often committed when one associates an idea with a negative origin, it can also go the other way: one can imply that because the idea has a positive origin, the idea must be true or more plausible. For example, suppose that Jill argues that the Golden Rule is a good way to live one’s life because the Golden Rule originated with Jesus in the Sermon on the Mount (it didn’t, actually, even though Jesus does state a version of the Golden Rule). Jill has committed the genetic fallacy in assuming that the (presumed) fact that Jesus is the origin of the Golden Rule has anything to do with whether the Golden Rule is a good idea.

I’ll end with an example from William James’s seminal work, The Varieties of Religious Experience. In that book (originally a set of lectures), James considers the idea that if religious experiences could be explained in terms of neurological causes, then the legitimacy of the religious experience is undermined. James, being a materialist who thinks that all mental states are physical states (ultimately a matter of complex brain chemistry), says that the fact that a religious experience has a physical cause does not undermine the veracity of that experience. Although he doesn’t use the term explicitly, James treats the claim that the physical origin of some experience undermines the veracity of that experience as a genetic fallacy. Origin is irrelevant for assessing the veracity of an experience, James thinks. In fact, he thinks that religious dogmatists who take the origin of the Bible to be the word of God are making exactly the same mistake as those who think that a physical explanation of a religious experience would undermine its veracity. We must assess ideas for their merits, James thinks, not their origins.

    How do intellectually virtuous thinkers avoid making ad hominem attacks when they’re inappropriate? Well, if we’re intellectually honest, we will emphasize substance over motives. We will be slow to question someone’s motives behind an argument, and instead start by charitably focusing on the substance of what they have to say. Of course, “slow” does not mean never, and sometimes a person’s behavior and manner of argument will convince us that bad motives are a factor, but we should not start from a place of assuming ill-intent.

    In general, we should be slow to cast aspersions on another person’s character or intelligence. Just because we think they have made a bad argument does not mean we should attribute this to a lack of ability or integrity on their part. Some people (ourselves most of all!) simply make mistakes. By focusing on their argument, we continue to treat them as an equal dialogue partner, someone whose views are worthy of our curiosity and our charity. This often makes the dialogue proceed better and with more insight. Again, “slow” does not mean never, and sometimes a person who is behaving belligerently needs to be told that their conduct makes them unfit for continued dialogue. But if we resort to such “last measures,” it should always be in the hope of helping a person become more intellectually virtuous so that they can rejoin the conversation, and certainly not with the secret motive of getting them to agree with us or winning the debate.

    Mansplaining

Sometimes we are so confident we’re right that we begin to explain why we’re right in a manner and tone that is aggressive, domineering, and keeps the other person from contributing. This has become known as Mansplaining, a term coined because many women have experienced having their ideas ignored or discredited by men who speak at them in a condescending manner. But mansplaining can be practiced by persons of any gender towards persons of any other gender. It often involves telling the other person how they feel or should feel, what they believe, and why their perspective doesn’t matter.

Consider Hanuni. Hanuni shares with a friend her anxieties concerning the Russian invasion of Ukraine. She tells her friend, “I have a hard time focusing on my daily responsibilities because I feel overwhelmed at the thought of Ukrainians right now fighting and dying just to be free enough to carry out their daily responsibilities.” Her friend, Hiari, replies to her, “C’mon, that’s not how you feel. You don’t know what it’s like to be a Ukrainian, and you’ve never been to war, so you really have no business assuming you know what they’re going through. You really should be counting your own blessings rather than worrying about things that are not your problem.” Think about how Hiari’s response shuts Hanuni down and makes her feel that she’s wrong to care about the situation in Ukraine or to empathize with people in other situations. Most significantly, Hiari’s comment erases Hanuni’s voice and contribution.

A somewhat high-profile instance of mansplaining occurred during the 2017 U.S. Senate debate on the confirmation of Senator Jeff Sessions (Alabama) to the office of Attorney General of the United States. The confirmation process was contentious in part because of concerns about Senator Sessions’s record on civil rights. To speak to this issue, fellow Senator Elizabeth Warren (Massachusetts) reminded the Senate of former Senator Ted Kennedy’s (also of Massachusetts) objections back in 1986 to Sessions being appointed to a judgeship because of concerns over suppression of black votes in his area of authority. She then proceeded to read a letter on the Senate floor authored by Coretta Scott King, the widow of civil rights leader Martin Luther King, Jr., written to the Senate Judiciary Committee in 1986 opposing Sessions’s confirmation to a judgeship.

As Senator Warren was reading from King’s letter, the presiding Senate Chair, Steve Daines, interrupted her twice to remind her that Senate rules prohibit casting aspersions on other Senators. After some back and forth, he permitted her to continue reading King’s letter. However, shortly after resuming her reading, then Senate Majority Leader Mitch McConnell interrupted her, insisting that she was slandering Senator Sessions’s character from the floor. He called for a vote on whether she would be allowed to continue her speech, and the Senate voted to terminate her speaking time. Later in the Senate debate, another male Senator read the letter by King without objection.

Shortly after Elizabeth Warren was told to sit down, Majority Leader McConnell explained the events in the following manner: “Here is what transpired. Senator Warren was giving a lengthy speech. She had appeared to violate the rule. She was warned. She was given an explanation. Nevertheless, she persisted.” McConnell’s interruptions of Warren’s speech, and his domineering chastisement lecturing her on why she was not permitted to continue, were more focused on explaining at her why her voice wasn’t going to be included than on dialoguing with her about what she had to say.

    Mansplaining is never good, but it’s important that we do not label everyone who criticizes what we believe as engaging in mansplaining. Intellectually virtuous people are teachable and allow others to help them see their mistakes. Sometimes people will dismiss the arguments of others as mansplaining when in fact they’re only voicing disagreement. Dismissing a reasonable counter-argument as mansplaining is in fact a type of mansplaining—another way to shut down someone’s voice. So it’s important to correctly identify cases of mansplaining and not use the concept as a means to avoid having to listen to anyone who challenges our thinking.

    Straw Argument

    Figure \(\PageIndex{2}\): Do you want to build a snowman? And then critique his position on global warming? (Image Credit: Otto Speckter in Picture Fables)

    The vice of constructing a straw argument happens when someone (willfully or mistakenly) misinterprets someone else's argument or position. We also might call it creating a Straw Argument, Straw Figure, Straw Person, or Straw Man.

The opponent's argument or position is characterized uncharitably so as to make it seem ridiculous or indefensible. It is a failure of charity because the person is attacking an irrelevant argument rather than the argument their opponent actually gave. Imagine someone building a straw doll and fighting that instead of their actual opponent. No one would think they had won the fight.

    From Matthew J. Van Cleave's Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

    Suppose that my opponent has argued for a position, call it position A, and in response to his argument, I give a rationally compelling argument against position B, which is related to position A, but is much less plausible (and thus much easier to refute). What I have just done is attacked a straw man—a position that “looks like” the target position, but is actually not that position. When one attacks a straw man, one commits the straw man fallacy. The straw man fallacy misrepresents one’s opponent’s argument and is thus a kind of irrelevance. Here is an example.

    Two candidates for political office in Colorado, Tom and Fred, are having an exchange in a debate in which Tom has laid out his plan for putting more money into health care and education and Fred has laid out his plan which includes earmarking more state money for building more prisons which will create more jobs and, thus, strengthen Colorado’s economy. Fred responds to Tom’s argument that we need to increase funding to health care and education as follows: “I am surprised, Tom, that you are willing to put our state’s economic future at risk by sinking money into these programs that do not help to create jobs. You see, folks, Tom’s plan will risk sending our economy into a tailspin, risking harm to thousands of Coloradans. On the other hand, my plan supports a healthy and strong Colorado and would never bet our state’s economic security on idealistic notions that simply don’t work when the rubber meets the road.”

    Fred has committed the straw man fallacy. Just because Tom wants to increase funding to health care and education does not mean he does not want to help the economy. Furthermore, increasing funding to health care and education does not entail that fewer jobs will be created.

Fred has attacked a position that is not the position that Tom holds, but is in fact a much less plausible, easier-to-refute position. Of course, it would be silly for any political candidate to run on a platform that included “harming the economy,” and presumably no political candidate would run on such a platform. Nonetheless, this exact kind of straw man is ubiquitous in political discourse in our country.

    Here is another example.

    Example \(\PageIndex{1}\)

Nancy has just argued that we should provide middle schoolers with sex education classes, including how to use contraceptives, so that they can practice safe sex should they end up in a situation where they are having sex. Fran responds: “Proponents of sex education try to encourage in our children a sex-with-no-strings-attached mentality, which is harmful to our children and to our society.”

Fran has committed the straw man (or straw woman) fallacy by misrepresenting Nancy’s position. Nancy’s position is not that we should encourage children to have sex, but that we should make sure that they are fully informed about sex so that if they do have sex, they go into it at least a little less blindly and are able to make better decisions regarding sex.

As with other fallacies of relevance, straw man fallacies can be compelling on some level, even though they are irrelevant. It may be that part of the reason we are taken in by straw man fallacies is that humans are prone to “demonize” the “other”—including those who hold a moral or political position different from our own. It is easy to think bad things about those with whom we do not regularly interact. And it is easy to forget that people who are different from us are still people just like us in all the important respects. Many years ago, atheists were commonly thought of as highly immoral people, and stories about the horrible things that atheists did in secret circulated widely. People believed that these strange “others” were capable of the most horrible savagery. After all, they may have reasoned, if you don’t believe there is a God holding us accountable, why be moral?

    The Jewish philosopher Baruch Spinoza, who lived in the Netherlands in the 17th century, was an atheist. He was accused of all sorts of things that were commonly believed about atheists, but he was in fact as upstanding and moral as any person you could imagine. The people who knew Spinoza knew better, but how could so many people be so wrong about Spinoza? I suspect that part of the reason is that since at that time there were very few atheists (or at least very few people actually admitted to it), very few people ever knowingly encountered an atheist. Because of this, the stories about atheists could proliferate without being put in check by the facts. I suspect the same kind of phenomenon explains why certain kinds of straw man fallacies proliferate. If you are a conservative and mostly interact only with other conservatives, you might be prone to holding lots of false beliefs about liberals. And so maybe you are less prone to notice straw man fallacies targeted at liberals, because the false beliefs you hold about them incline you to see the straw man fallacies as true.

Thinking with virtue means that when others explicitly deny a view, we should be slow to attribute that view to them. This does not mean we never do so; again, if someone is acting in bad faith and we think the view they assert is not the one they actually hold, we might need to make their hidden agenda clear. But notice this is no longer a straw argument, if we’re right in our suspicion. Nonetheless, we start from a place of being slow to do this, wanting to take people at face value first before assuming they don’t believe what they are claiming.

    A related practice in virtue is to be slow to attribute to others views that are clearly false, implausible, or lie at the extremes of human belief. Again, sometimes we have to do this because there are people who believe false, implausible, or extremist views. But we start from a place of charitably assuming rationality and truth in people, being slow to change our assumption.

A really useful way to assist with this is to summarize the other person’s views and arguments back to them before making a critique. If we stop ourselves and explain to someone else what we think they are arguing, it (a) gives them an opportunity to clarify before we make objections, and (b) shows them we are acting in good faith and that they can trust us not to construct straw arguments out of what they said.

    Red Herring

    Figure \(\PageIndex{3}\): Even the goodest boiz get distracted easily. SQUIRREL! (Image Credit: Otto Speckter in Picture Fables)

A herring is a pungent fish, and it was especially so in the days before refrigeration. William Cobbett claimed to have used one as a boy to lure unsuspecting hounds and their hunters away from their intended prey. Cobbett wanted the rabbit for himself, so he dragged a herring along the ground to make a stench trail, drawing the hounds away from the rabbit’s hole.

Interesting trick! But what does this have to do with reasoning well? Simple: one way that people reason improperly is by not staying on topic. If you start out talking about one thing but end up talking about another, chances are either you or your conversation partner has committed the vice of a red herring. This is where you intentionally or unintentionally change the subject. Often it happens when a politician doesn’t want to answer a question. “I don’t want to talk about jobs, I want to talk about the brave men and women who serve in our nation’s proud military…” It’s a great way to get around having to answer a question.

A Red Herring is sometimes hard to distinguish from a Straw Figure, so let’s focus on the key difference for a second. In a straw figure, the offender attacks an irrelevant argument instead of the actual argument of their opponent. In a red herring, the offender introduces an irrelevant topic and discusses that instead of the topic at hand. In a straw figure we don’t change topics; we just start talking about a different argument on the same topic.

From: Knachel, Matthew, "Fundamental Methods of Logic" (2017). Philosophy Faculty Books. 1. http://dc.uwm.edu/phil_facbooks/1. Creative Commons Attribution 4.0 International License.

    A fictional example can illustrate the technique. Consider Frank, who, after a hard day at work, heads to the tavern to unwind. He has far too much to drink, and, unwisely, decides to drive home. Well, he’s swerving all over the road, and he gets pulled over by the police. Let’s suppose that Frank has been pulled over in a posh suburb where there’s not a lot of crime. When the police officer tells him he’s going to be arrested for drunk driving, Frank becomes belligerent:

    “Where do you get off? You’re barely even real cops out here in the ’burbs. All you do is sit around all day and pull people over for speeding and stuff. Why don’t you go investigate some real crimes? There’s probably some unsolved murders in the inner city they could use some help with. Why do you have to bother a hard-working citizen like me who just wants to go home and go to bed?”

    Frank is committing the red herring fallacy (and not very subtly). The issue at hand is whether or not he deserves to be arrested for driving drunk. He clearly does. Frank is not comfortable arguing against that position on the merits. So he changes the subject—to one about which he feels like he can score some debating points. He talks about the police out here in the suburbs, who, not having much serious crime to deal with, spend most of their time issuing traffic violations. Yes, maybe that’s not as taxing a job as policing in the city. Sure, there are lots of serious crimes in other jurisdictions that go unsolved. But that’s beside the point! It’s a distraction from the real issue of whether Frank should get a DUI.

    Politicians use the red herring fallacy all the time. Consider a debate about Social Security—a retirement stipend paid to all workers by the federal government. Suppose a politician makes the following argument:

We need to cut Social Security benefits, raise the retirement age, or both. As the baby boom generation reaches retirement age, the amount of money set aside for their benefits will not be enough to cover them while ensuring the same standard of living for future generations when they retire. The status quo will put enormous strains on the federal budget going forward, and we are already dealing with large, economically dangerous budget deficits now. We must reform Social Security.

    Now imagine an opponent of the proposed reforms offering the following reply:

Social Security is a sacred trust, instituted during the Great Depression by FDR to ensure that no hard-working American would have to spend their retirement years in poverty. I stand by that principle. Every citizen deserves a dignified retirement. Social Security is a more important part of that than ever these days, since the downturn in the stock market has left many retirees with very little investment income to supplement government support.

    The second speaker makes some good points, but notice that they do not speak to the assertion made by the first: Social Security is economically unsustainable in its current form. It’s possible to address that point head on, either by making the case that in fact the economic problems are exaggerated or non-existent, or by making the case that a tax increase could fix the problems. The respondent does neither of those things, though; he changes the subject, and talks about the importance of dignity in retirement. I’m sure he’s more comfortable talking about that subject than the economic questions raised by the first speaker, but it’s a distraction from that issue—a red herring.

    Perhaps the most blatant kind of red herring is evasive: used especially by politicians, this is the refusal to answer a direct question by changing the subject. Examples are almost too numerous to cite; to some degree, no politician ever answers difficult questions straightforwardly (there’s an old axiom in politics, put nicely by Robert McNamara: “Never answer the question that is asked of you. Answer the question that you wish had been asked of you.”).

    A particularly egregious example of this occurred in 2009 on CNN’s Larry King Live. Michele Bachmann, Republican Congresswoman from Minnesota, was the guest. The topic was “birtherism,” the (false) belief among some that Barack Obama was not in fact born in America and was therefore not constitutionally eligible for the presidency. After playing a clip of Senator Lindsey Graham (R, South Carolina) denouncing the myth and those who spread it, King asked Bachmann whether she agreed with Senator Graham. She responded thus:

    "You know, it's so interesting, this whole birther issue hasn't even been one that's ever been brought up to me by my constituents. They continually ask me, where's the jobs? That's what they want to know, where are the jobs?”

    Bachmann doesn’t want to respond directly to the question. If she outright declares that the “birthers” are right, she looks crazy for endorsing a clearly false belief. But if she denounces them, she alienates a lot of her potential voters who believe the falsehood. Tough bind. So she blatantly, and rather desperately, tries to change the subject. Jobs! Let’s talk about those instead. Please?

    Irrelevant Appeals

Any kind of appeal to a factor, consideration, or reason that isn’t relevant to the argument at hand, but is used as a reason rather than as a mere distraction (a Red Herring is a distraction, not an irrelevant reason), is called an Irrelevant Appeal. The premises aren’t relevant to the truth or falsity of the conclusion because whether or not the conclusion is true doesn’t depend at all on whether or not the premises are true.

    The core Irrelevant Appeals to Know:

    • Appeal to Unqualified/False Authority
    • Appeal to Force
    • Appeal to Popularity/to the People/Bandwagon
    • Appeal to Consequences

    Appeal to Unqualified Authority

Note that this is sometimes called the "Appeal to Authority," but we trust authorities all the time about lots of things, and we're right to do so. The fallacy occurs when we trust an authority on one subject (or perhaps someone who is not an authority on anything at all) to speak on another subject.

Figure \(\PageIndex{4}\): Never mind the fact that you’re my elder, Mr. Turkey; you’re no expert on Quantum Physics! (Image Credit: Otto Speckter in Picture Fables)

    From Matthew J. Van Cleave’s Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

In a society like ours, we have to rely on authorities to get on in life. For example, the things I believe about electrons are not things that I have ever verified for myself. Rather, I have to rely on the testimony and authority of physicists to tell me what electrons are like. Likewise, when there is something wrong with my car, I have to rely on a mechanic (since I lack that expertise) to tell me what is wrong with it. Such is modern life. So there is nothing wrong with needing to rely on authority figures in certain fields (people with the relevant expertise in that field)—it is inescapable. The problem comes when we invoke someone whose expertise is not relevant to the issue for which we are invoking it. For example, suppose that a group of doctors sign a petition to prohibit abortions, claiming that abortions are morally wrong. If Bob cites the fact that these doctors are against abortion as showing that abortion must be morally wrong, then Bob has committed the appeal to authority fallacy. The problem is that doctors are not authorities on what is morally right or wrong. Even if they are authorities on how the body works and how to perform certain procedures (such as abortion), it doesn’t follow that they are authorities on whether or not these procedures should be performed—the ethical status of these procedures. It would be just as much an appeal to authority fallacy if Melissa were to argue that since some other group of doctors supported abortion, that shows that it must be morally acceptable. In either case, since doctors are not authorities on moral issues, their opinions on a moral issue like abortion are irrelevant. In general, an appeal to authority fallacy occurs when someone takes what an individual says as evidence for some claim, when that individual has no particular expertise in the relevant domain (even if they do have expertise in some other, unrelated, domain).

    Appeal to Force

An appeal to force is an irrelevant appeal because it apparently argues that some proposition is true, but uses as justification for that claim a threat against the listener: if you don’t believe this, then you will suffer bad consequences. But that’s not a reason to believe the proposition; that’s a reason to make yourself believe it or to act as if you believe it. A good argument actually gives you reason to believe the conclusion, and an appeal to force does no such thing!

The following is from: Knachel, Matthew, "Fundamental Methods of Logic" (2017). Philosophy Faculty Books. 1. http://dc.uwm.edu/phil_facbooks/1. Creative Commons Attribution 4.0 International License.

    Perhaps the least subtle of the fallacies is the appeal to force, in which you attempt to convince your interlocutor to believe something by threatening him. Threats pretty clearly distract one from the business of dispassionately appraising premises’ support for conclusions, so it’s natural to classify this technique as a Fallacy of Distraction.

    There are many examples of this technique throughout history. In totalitarian regimes, there are often severe consequences for those who don’t toe the party line (see George Orwell’s 1984 for a vivid, though fictional, depiction of the phenomenon). The Catholic Church used this technique during the infamous Spanish Inquisition: the goal was to get non-believers to accept Christianity; the method was to torture them until they did.

    An example from much more recent history: when it became clear in 2016 that Donald Trump would be the Republican nominee for president, despite the fact that many rank-and-file Republicans thought he would be a disaster, the Chairman of the Republican National Committee (allegedly) sent a message to staffers informing them that they could either support Trump or leave their jobs. Not a threat of physical force, but a threat of being fired; same technique.

    Again, the appeal to force is not usually subtle. But there is a very common, very effective debating technique that belongs under this heading, one that is a bit less overt than explicitly threatening someone who fails to share your opinions. It involves the sub-conscious, rather than conscious, perception of a threat.

    Here’s what you do: during the course of a debate, make yourself physically imposing; sit up in your chair, move closer to your opponent, use hand gestures, like pointing right in their face; cut them off in the middle of a sentence, shout them down, be angry and combative. If you do these things, you’re likely to make your opponent very uncomfortable—physically and emotionally. They might start sweating a bit; their heart may beat a little faster. They’ll get flustered and maybe trip over their words. They may lose their train of thought; winning points they may have made in the debate will come out wrong or not at all. You’ll look like the more effective debater, and the audience’s perception will be that you made the better argument.

    But you didn’t. You came off better because your opponent was uncomfortable. The discomfort was not caused by an actual threat of violence; on a conscious level, they never believed you were going to attack them physically. But you behaved in a way that triggered, at the sub-conscious level, the types of physical/emotional reactions that occur in the presence of an actual physical threat. This is the more subtle version of the appeal to force. It’s very effective and quite common (watch cable news talk shows and you’ll see it; Bill O’Reilly is the master).

    Ad Populum

    Figure \(\PageIndex{5}\): I don’t care how popular bear jousting is, it’s just wrong! (Image Credit: Otto Speckter in Picture Fables)

Appeal to the People, Appeal to Popularity, the Nose-Counting Fallacy, the Bandwagon Fallacy, and argumentum ad populum are all names for the same thing: appealing to the popularity of a thing, idea, or practice in order to justify it. In an argument, one appeals to the popularity of a conclusion and then uses that popularity as a basis for inferring that the conclusion is true.

The popularity of a new smartphone or computer might be used to justify its status as the best available. The popularity of a politician might be used to justify the claim that they should be President. The popularity of a person might be used to attempt to exonerate them from a crime or protect them from criticism. In each case, mere popularity doesn’t mean we should believe something is good or worthy of special consideration.

    The popularity of belief in God might be used as evidence that God exists. After all, that many people can’t be wrong, right? Alternatively, the popularity among scientists of belief in an atheistic universe might be used as evidence that God doesn’t exist. After all, that many scientists can’t be wrong, can they?

    In reality, the popularity of a belief doesn’t give us reason to think that belief is true. After all, there have been lots of popular ideas in the past that turned out to be not only false, but morally abhorrent!

    Appeal to Consequences

Appeal to consequences is yet another “irrelevant appeal” vice. Again, something that isn’t relevant to the truth or falsity of the conclusion is appealed to in arguing for that conclusion. It won’t help, though, since it’s not relevant!

    From Matthew J. Van Cleave’s Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

The appeal to consequences fallacy is like the reverse of the genetic fallacy: whereas the genetic fallacy consists in the mistake of trying to assess the truth or reasonableness of an idea based on the origin of the idea, the appeal to consequences fallacy consists in the mistake of trying to assess the truth or reasonableness of an idea based on the (typically negative) consequences of accepting that idea. For example, suppose that the results of a study revealed that there are IQ differences between different races (this is a fictitious example; there is no such study that I know of). In debating the results of this study, one researcher claims that if we were to accept these results, it would lead to increased racism in our society, which is not tolerable. Therefore, these results must not be right since if they were accepted, it would lead to increased racism. The researcher who responded in this way has committed the appeal to consequences fallacy. Again, we must assess the study on its own merits. If there is something wrong with the study, some flaw in its design, for example, then that would be a relevant criticism of the study. However, the fact that the results of the study, if widely circulated, would have a negative effect on society is not a reason for rejecting these results as false. The consequences of some idea (good or bad) are irrelevant to the truth or reasonableness of that idea.

    Notice that the researchers, being convinced of the negative consequences of the study on society, might rationally choose not to publish the study (for fear of the negative consequences). This is totally fine and is not a fallacy. The fallacy consists not in choosing not to publish something that could have adverse consequences, but in claiming that the results themselves are undermined by the negative consequences they could have. The fact is, sometimes truth can have negative consequences and falsehoods can have positive consequences. This just goes to show that the consequences of an idea are irrelevant to the truth or reasonableness of an idea.

    The Fallacy Fallacy

Perhaps the most important vice to be aware of goes by the name of the Fallacy Fallacy! Remember that most other textbooks call these vices “fallacies,” and recall that at the beginning of the chapter we said that whether or not one’s opponent argues virtuously is irrelevant to whether or not one’s opponent is in fact correct in their conclusion. They might believe the right thing for the wrong reasons, or they might have good reasons that just don’t come through clearly when they try to explain their beliefs. Here’s an example of the fallacy fallacy:

    Example \(\PageIndex{2}\)

    Person E: My opponent has argued that we should lower taxes because it would stimulate commerce. I think we should be focusing on the war we’ve been fighting at great cost instead of arguing about whether or not lower taxes would stimulate the economy.

    Person F: Well clearly my opponent has never taken a logic and critical thinking class, because they have just committed a grievous sin against reasoning: the red herring fallacy. I, therefore, conclude that we should lower taxes.

    Person E is indeed guilty of a red herring: they changed the subject to something irrelevant to the original topic. We started talking about an inference from “lowering taxes would stimulate the economy” to “we should lower taxes.” But by the end of Person E’s speech, we were talking about something different: a costly war our nation is fighting. The topic has changed.

    That being true, though, doesn’t mean that Person E is wrong about their conclusion. If Person E wants to cut spending on wars or raise taxes to pay for them, the fact that they reasoned badly in one particular instance does not mean that their position is wrong. It may well be that we should raise taxes. Person E just isn’t the best representative of the view. Person F doesn’t get my vote either, though, because they don’t understand a basic truth of reasoning: just because an argument for a position is bad doesn’t mean that position is wrongheaded or incorrect.

    The Fallacy Fallacy happens when someone uses the fact that a fallacy was committed to justify rejecting the conclusion of the fallacious argument. Avoid this sort of thinking. The fallacy fallacy might count as a vice of relevance, so we’ll include it in that category for our purposes here.
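    Put schematically (again, a sketch of ours rather than a formula from the text), the fallacy fallacy reasons like this, where \(C\) is any conclusion:

    \[\text{The argument offered for } C \text{ commits a fallacy.} \\ \therefore\; C \text{ is false.} \nonumber\]

    The premise shows at most that this particular argument fails to support \(C\); \(C\) might still be true for reasons nobody in the conversation has offered yet.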

    Vices of Presumption

    The vices in the previous section were all examples of appealing to considerations that aren’t relevant to the topic or argument at hand. The vices in this section likewise share a unifying theme: something is being presumed in the premises that allows the conclusion to be inferred, and that something, the presumption of the argument, is in each case not warranted. If we sneak in an assumption without actually justifying it, then we’re creating the illusion that we’ve given good reasons for what we believe, when in fact we have only presumed what we believe. Try not to presume!

    Vices of presumption are shortfalls in thinking that problematically presume their conclusion to be true in the setup or assumptions of the argument. A funny example of presumption (my classmates did this as a joke when I was in elementary school) is the complex question. For instance, you could ask, “Does your mom know that you do drugs?” You would be presuming that the recipient of the question does drugs, because you’re only asking about their mother’s knowledge. Other examples are “When are you going to stop stealing my food?” and “How do you justify to yourself that you lie to everyone all the time?” In each case, facts are being presumed that have not been agreed on as facts! This helps us get a sense of what presumption is and why it might be a problem.

    Inequity in Evaluating Evidence

    Consider someone who thinks whole milk ice cream is superior to frozen yogurt. Whenever someone presents evidence of the health benefits or excellent flavor of frozen yogurt, they scrutinize the evidence with great skepticism, looking for every little reason to reject it. They demand near-scientific thresholds of proof to make the case for frozen yogurt. But when it comes to evidence for their own love of whole milk ice cream, they are willing to accept even anecdotal testimony or hasty statistics as bolstering their case. What has gone wrong in this situation?

    The ice cream lover is someone who applies one standard of evidence to evidence against their position, and another standard of evidence to evidence that favors their position. This is a way of presuming one is right before the evidence has been heard, such that the evaluation of the evidence serves to make sure the “right” conclusion results.

    This is not how an intellectually honest and humble thinker approaches matters. They want the truth, even if it requires admitting their mistake. Thinkers disposed to virtue will be even-handed when assessing evidence, especially evidence supporting views different from their own. They will not favor evidence that supports their belief simply because it supports their belief, nor will they discount evidence that undermines their belief simply because it undermines it.

    Inequity in evaluating evidence is typically an expression of a deeper character vice in humans: confirmation bias. We will learn more about confirmation bias in Chapter 8.1, “Confirmation Bias.” Confirmation bias is the psychological tendency whereby, once we believe something, it is easier for us to keep believing it than to change our minds. Thus we evaluate evidence unequally because our brains are predisposed to hold on to what we already believe rather than give credence to possibilities that would require us to change our minds.

    In Chapter 8.5, “Texas Sharpshooter,” we’ll also learn about a fallacy of inductive reasoning nicknamed after a tall tale about a Texas sharpshooter. This fallacy is related to inequity in evaluating evidence, but the two vices are subtly different. Inequity in evaluating evidence is primarily about how we presume the evidence should be judged: evidence against us is judged more stringently, while evidence in our favor is judged more leniently. As we’ll see when we learn about the Texas sharpshooter fallacy, that vice instead describes a pattern of (fallacious) inductive reasoning in which we start from our conclusion and select evidence that supports it (rather than virtuous induction, where we start from our evidence and infer a conclusion). A virtuous thinker allows new evidence to dictate which conclusion is the most reasonable one. But you should see all these vices as a family: they are different ways of not thinking well about evidence, and different ways of not displaying the virtues of curiosity and honesty.

    False Dilemma/Black and White

    From Matthew J. Van Cleave’s Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

    Suppose I were to argue as follows:

    Raising taxes on the wealthy will either hurt the economy or it will help it. But it won’t help the economy. Therefore it will hurt the economy.

    The standard form of this argument is:

    1. Either raising taxes on the wealthy will hurt the economy or it will help it.

    2. Raising taxes on the wealthy won’t help the economy.

    3. Therefore, raising taxes on the wealthy will hurt the economy.

    This argument contains a fallacy called a “false dichotomy.” A false dichotomy is simply a disjunction that does not exhaust all of the possible options. In this case, the problematic disjunction is the first premise: either raising the taxes on the wealthy will hurt the economy or it will help it. But these aren’t the only options. Another option is that raising taxes on the wealthy will have no effect on the economy. Notice that the argument above has the form of a disjunctive syllogism:

    \[A \vee B \\ \sim A \\ \therefore B \nonumber\]

    However, since the first premise presents two options as if they were the only two options, when in fact they aren’t, the first premise is false and the argument fails. Notice that the form of the argument is perfectly good—the argument is valid. The problem is that this argument isn’t sound because the first premise of the argument commits the false dichotomy fallacy. False dichotomies are commonly encountered in the context of a disjunctive syllogism or constructive dilemma (see chapter 2).
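    To see exactly why repairing the disjunction breaks the argument, here is a sketch (our own illustration) with \(A\) for “it will help,” \(B\) for “it will hurt,” and \(C\) for the overlooked option, “it will have no effect”:

    \[A \vee B \vee C \\ \sim A \\ \therefore B \vee C \nonumber\]

    Once the genuinely exhaustive disjunction is in place, ruling out “help” leaves both “hurt” and “no effect” standing, so the original conclusion that raising taxes will hurt the economy no longer follows.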

    In a speech made on April 5, 2004, President Bush made the following remarks about the causes of the Iraq war:

    Saddam Hussein once again defied the demands of the world. And so I had a choice: Do I take the word of a madman, do I trust a person who had used weapons of mass destruction on his own people, plus people in the neighborhood, or do I take the steps necessary to defend the country? Given that choice, I will defend America every time.

    The false dichotomy here is the claim that:

    Either I trust the word of a madman or I defend America (by going to war against Saddam Hussein’s regime).

    The problem is that these aren’t the only options. Other options include ongoing diplomacy and economic sanctions. Thus, even if it is true that Bush shouldn’t have trusted the word of Hussein, it doesn’t follow that the only other option is going to war against Hussein’s regime. (Furthermore, it isn’t clear in what sense this was needed to defend America.) That is a false dichotomy.

    As with all the previous informal fallacies we’ve considered, the false dichotomy fallacy requires an understanding of the concepts involved. Thus, we have to use our understanding of the world in order to assess whether a false dichotomy fallacy is being committed or not.

    Begging the Question

    From Matthew J. Van Cleave’s Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

    Consider the following argument:

    Capital punishment is justified for crimes such as rape and murder because it is quite legitimate and appropriate for the state to put to death someone who has committed such heinous and inhuman acts.

    The premise indicator “because” denotes the premise and (derivatively) the conclusion of this argument. In standard form, the argument is this:

    1. It is legitimate and appropriate for the state to put to death someone who commits rape or murder.

    2. Therefore, capital punishment is justified for crimes such as rape and murder.

    You should notice something peculiar about this argument: the premise is essentially the same claim as the conclusion. The only difference is that the premise spells out what capital punishment means (the state putting criminals to death) whereas the conclusion just refers to capital punishment by name, and the premise uses terms like “legitimate” and “appropriate” whereas the conclusion uses the related term, “justified.” But these differences don’t add up to any real differences in meaning. Thus, the premise is essentially saying the same thing as the conclusion. This is a problem: we want our premise to provide a reason for accepting the conclusion. But if the premise is the same claim as the conclusion, then it can’t possibly provide a reason for accepting the conclusion! Begging the question occurs when one (either explicitly or implicitly) assumes the truth of the conclusion in one or more of the premises. Begging the question is thus a kind of circular reasoning.

    One interesting feature of this fallacy is that formally there is nothing wrong with arguments of this form. Here is what I mean. Consider an argument that explicitly commits the fallacy of begging the question. For example,

    1. Capital punishment is morally permissible.

    2. Therefore, capital punishment is morally permissible.

    Now, apply any method of assessing validity to this argument and you will see that it is valid by any method. If we use the informal test (by trying to imagine that the premises are true while the conclusion is false), then the argument passes the test, since any time the premise is true, the conclusion will have to be true as well (since it is the exact same statement). Likewise, the argument is valid by our formal test of validity, truth tables. But while this argument is technically valid, it is still a really bad argument. Why? Because the point of giving an argument in the first place is to provide some reason for thinking the conclusion is true for those who don’t already accept the conclusion. But if one doesn’t already accept the conclusion, then simply restating the conclusion in a different way isn’t going to convince them. Rather, a good argument will provide some reason for accepting the conclusion that is sufficiently independent of that conclusion itself. Begging the question utterly fails to do this, and that is why it counts as an informal fallacy even though there is absolutely nothing wrong with the argument formally.
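    To make the truth-table point concrete, here is the (very short) table for the argument above. Since the premise and the conclusion are the very same statement \(P\), their columns are identical:

    \[\begin{array}{c|c|c} P & \text{Premise: } P & \text{Conclusion: } P \\ \hline T & T & T \\ F & F & F \end{array} \nonumber\]

    There is no row in which the premise is true and the conclusion false, so the argument is valid. The defect, as the text says, is informal rather than formal.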

    Figure \(\PageIndex{6}\): C’mon dog, you should trust me, my friend Rosco will tell you I’m trustworthy. I can vouch for Rosco. He’s a good guy. (Image Credit: Otto Speckter in Picture Fables)

    Whether or not an argument begs the question is not always an easy matter to sort out. As with all informal fallacies, detecting it requires a careful understanding of the meaning of the statements involved in the argument. Here is an example of an argument where it is not as clear whether there is a fallacy of begging the question:

    Christian belief is warranted because according to Christianity there exists a being called “the Holy Spirit” which reliably guides Christians towards the truth regarding the central claims of Christianity.[1]

    One might think that there is a kind of circularity (or begging the question) involved in this argument since the argument appears to assume the truth of Christianity in justifying the claim that Christianity is true. But whether or not this argument really does beg the question is something on which there is much debate within the sub-field of philosophy called epistemology (“study of knowledge”). The philosopher Alvin Plantinga argues persuasively that the argument does not beg the question, but being able to assess that argument takes years of patient study in the field of epistemology (not to mention a careful engagement with Plantinga’s work). As this example illustrates, the issue of whether an argument begs the question requires us to draw on our general knowledge of the world. This is the mark of an informal, rather than formal, fallacy.

    Burden of Proof Shifting

    Sometimes we have a responsibility to offer evidence or proof for a claim we believe in. If I believe in dragons, then most people would think I’m responsible for proving that they exist if I expect anyone else to join me in believing in them.

    Alternatively, if I believe that drivers must obey the rules of the road, most people wouldn’t think I’d have to offer any justification for that belief if I brought it up in normal conversation.

    Sometimes we have the burden of proof, but other times we do not. Here’s a conversation:

    Aisha: I think an alien spacecraft came and kidnapped my dog last night.

    Rashid: What makes you think that?

    Aisha: Well, can you prove that they didn’t?

    Figure \(\PageIndex{7}\): (Image Credit: Otto Speckter in Picture Fables)

    Something has gone wrong here, right? Aisha is making a sort of mistake: she’s making an outlandish claim but refuses to defend it or offer evidence or reasons for believing it. The vice of Burden Shifting occurs when one insists that someone else must prove them wrong when in reality they themselves bear the burden of proof: one should prove oneself right!

    As a general rule, whenever someone makes a positive claim about the world (like “aliens kidnapped my dog”), they should offer evidence or reasons for believing that claim. When one makes a negative claim (like “aliens didn’t kidnap your dog”), they usually aren’t in the same position: it seems they don’t have to prove the negative claim unless there’s already some good reason to believe the positive claim.

    This rule isn’t perfect, since sometimes a belief is so commonsense that it need not be proved, but it seems to be a good general norm for where the burden of proof lies.

    Figure \(\PageIndex{8}\): (Credit: Phil Stilwell CC-License)

    Alternatively, as a general rule, the least plausible claim has the highest burden of proof. Since the plausibility of a claim depends on all of our other beliefs, though, this is sometimes hard to adjudicate. That is fancy speak for the following idea: whoever is making the wilder claim, or the claim that we’re less likely to believe right away, is the one with the burden of proof. This is a matter, though, of the norms of the culture we live in. In a racist society, egalitarian ideals are the ones that are “less plausible” to the elites, so they would demand more proof from someone making a claim that to us is obviously correct: that human beings are essentially equal regardless of their race. This presents a bit of a problem for those who want to use “plausibility” to decide who has the burden of proof. It suffices to say, for now, that this is simply complex and difficult to figure out.


    [1] Quassim Cassam, “Vice Epistemology,” The Monist, vol. 99, no. 2 (April 2016), pp. 159–180.


    This page titled 4.3: Some Intellectual Vices is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Andrew Lavin via source content that was edited to the style and standards of the LibreTexts platform.
