9.5: Fallacies of Induction
The fallacies of induction are all failures in reasoning about the messy world of cause and effect, contingent facts of the universe, and generalizations about kinds of things in the world. In each case, an argument is put forth using evidence incorrectly, or making bad predictions, or generalizing improperly.
Appeal to Ignorance
We can’t prove either way whether there is or isn’t an all-powerful god, you might suppose. (Intelligent people disagree). If that’s true, then it seems like we’re free to believe in an all-powerful loving god, right? Well... in a sense. You’re free to believe whatever you like, but we shouldn’t pretend that you’re justified in believing in a god on the basis of the lack of proof against the existence of god. You could cite the exact same evidence (i.e. the lack of conclusive evidence for either side) to justify believing in no god. Or, as is sometimes done, to justify believing in a Flying Spaghetti Monster that created the universe.
Whether or not you believe in a creator god, therefore, cannot depend on the seeming fact that no one has proved that there isn’t a god. There might be other reasons for believing in a god, but this ain’t one of them.
This example illustrates the structure of the argument from ignorance. Essentially, the basic argument pattern looks like this:
\[\begin{align*}& \underline{\text{We don’t know whether proposition X is true or false.}}\\ & \therefore \text{ X is true (or: X is false).}\end{align*}\]
This is a bad argument pattern because the fact that we don’t know the truth of the matter is reason for withholding judgment and not coming to hold a determinate belief. It’s not reason for or against believing something.
Sometimes, it’s okay not to have an opinion or belief about a particular subject matter.
The following is from: Knachel, Matthew, "Fundamental Methods of Logic" (2017).
Philosophy Faculty Books. 1. http://dc.uwm.edu/phil_facbooks/1
Creative Commons Attribution 4.0 International License
This is a particularly egregious and perverse fallacy. In essence, it’s an inference from premises to the effect that there’s a lack of knowledge about some topic to a definite conclusion about that topic. We don’t know; therefore, we know!
Of course, put that baldly, it’s plainly absurd; actual instances are more subtle. The fallacy comes in a variety of closely related forms. It will be helpful to state them in bald/absurd schematic fashion first, then elucidate with more subtle real-life examples.
The first form can be put like this:
\[\begin{align*}& \underline{\text{Nobody knows how to explain phenomenon X.}}\\ & \therefore \text{ My crazy theory about X is true.}\end{align*}\]
That sounds silly, but consider an example: those “documentary” programs on cable TV about aliens. You know, the ones where they suggest that extraterrestrials built the pyramids or something (there are books and websites, too). How do they get you to believe that crazy theory? By creating mystery! By pointing to facts that nobody can explain. The Great Pyramid at Giza is aligned (almost) exactly with the magnetic north pole! On the day of the summer solstice, the sun sets exactly between two of the pyramids! The height of the Great Pyramid is (almost) exactly one one-millionth the distance from the Earth to the Sun! How could the ancient Egyptians have such sophisticated astronomical and geometrical knowledge? Why did the Egyptians, careful record-keepers in (most) other respects, (apparently) not keep detailed records of the construction of the pyramids? Nobody knows. Conclusion: aliens built the pyramids.
In other words, there are all sorts of (sort of) surprising facts about the pyramids, and nobody knows how to explain them. From these premises, which establish only our ignorance, we’re encouraged to conclude that we know something: aliens built the pyramids. That’s quite a leap—too much of a leap.
Another form this fallacy takes can be put crudely thus:
\[\begin{align*}& \underline{\text{Nobody can PROVE that I’m wrong.}}\\ & \therefore \text{ I’m right.}\end{align*}\]
The word ‘prove’ is in all-caps because stressing it is the key to this fallacious argument: the standard of proof is set impossibly high, so that almost no amount of evidence would constitute a refutation of the conclusion.
An example will help. There are lots of people who claim that evolutionary biology is a lie: there’s no such thing as evolution by natural selection, and it’s especially false to claim that humans evolved from earlier species, that we share a common ancestor with apes. Rather, the story goes, the Bible is literally true: the Earth is only about 6,000 years old, and humans were created as-is by God just as the Book of Genesis describes. The Argument from Ignorance is one of the favored techniques of proponents of this view. They are especially fond of pointing to “gaps” in the fossil record—the so-called “missing link” between humans and a pre-human, ape-like species—and claim that the incompleteness of the fossil record vindicates their position.
But this argument is an instance of the fallacy. The standard of proof—a complete fossil record without any gaps—is impossibly high. Evolution has been going on for a LONG time (the Earth is actually about 4.5 billion years old, and living things have been around for at least 3.5 billion years). So many species have appeared and disappeared over time that it’s absurd to think that we could even come close to collecting fossilized remains of anything but the tiniest fraction of them. It’s hard to become a fossil, after all: a creature has to die under special circumstances to even have a chance for its remains to do anything other than turn into compost. And we haven’t been searching for fossils in a systematic way for very long (only since the mid-1800s or so). It’s no surprise that there are gaps in the fossil record, then. What’s surprising, in fact, is that we have as rich a fossil record as we do. Many, many transitional species have been discovered, both between humans and their ape-like ancestors, and between other modern species and their distant forebears (whales used to be land-based creatures, for example; we know this (in part) from the fossils of early proto-whale species with longer and longer rear hip- and leg-bones).
We will never have a fossil record complete enough to satisfy skeptics of evolution. But their standard is unreasonably high, so their argument is fallacious. Sometimes they put it even more simply: nobody was around to witness evolution in action; therefore, it didn’t happen. This is patently absurd, but it follows the same pattern: an unreasonable standard of proof (witnesses to evolution in action; impossible, since it takes place over such a long period of time), followed by the leap to the unwarranted conclusion.
Yet another version of the Argument from Ignorance goes like this:
\[\begin{align*}& \underline{\text{I can’t imagine/understand how X could be true.}}\\ & \therefore \text{ X is false.}\end{align*}\]
Of course lack of imagination on the part of an individual isn’t evidence for or against a proposition, but people often argue this way. A (hilarious) example comes from the rap duo Insane Clown Posse in their 2009 single, “Miracles”. Here’s the line:
Water, fire, air and dirt
F**king magnets, how do they work?
And I don’t wanna talk to a scientist
Y’all mother**kers lying, and getting me pissed.
Violent J and Shaggy 2 Dope can’t understand how there could be a scientific, non-miraculous explanation for the workings of magnets. They conclude, therefore, that magnets are miraculous.
A final form of the Argument from Ignorance can be put crudely thus:
\[\begin{align*}& \underline{\text{No evidence has been found that X is true.}}\\ & \therefore \text{ X is false.}\end{align*}\]
You may have heard the slogan, “Absence of evidence is not evidence of absence.” This is an attempt to sum up this version of the fallacy. But it’s not quite right. What it should say is that absence of evidence is not always definitive evidence of absence. An example will help illustrate the idea. During the 2016 presidential campaign, a reporter (David Fahrenthold) took to Twitter to announce that despite having “spent weeks looking for proof that [Donald Trump] really does give millions of his own [money] to charity...” he could only find one donation, to the NYC Police Athletic League. Trump has claimed to have given millions of dollars to charities over the years. Does this reporter’s failure to find evidence of such giving prove that Trump’s claims about his charitable donations are false? No. To rely only on this reporter’s testimony to draw such a conclusion would be to commit the fallacy.
However, the failure to uncover evidence of charitable giving does provide some reason to suspect Trump’s claims may be false. How much of a reason depends on the reporter’s methods and credibility, among other things. But sometimes a lack of evidence can provide strong support for a negative conclusion. This is an inductive argument; it can be weak or strong. For example, despite multiple claims over many years (centuries, if some sources can be believed), no evidence has been found that there’s a sea monster living in Loch Ness in Scotland. Given the size of the body of water, and the extensiveness of the searches, this is pretty good evidence that there’s no such creature—a strong inductive argument to that conclusion. To claim otherwise—that there is such a monster, despite the lack of evidence—would be to commit the version of the fallacy whereby one argues “You can’t PROVE I’m wrong; therefore, I’m right,” where the standard of proof is unreasonably high.
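The idea that absence of evidence can be weak or strong evidence of absence can be made precise with Bayes’ theorem. The sketch below is purely illustrative: the prior probability and the chance that a single thorough search would detect a real monster are invented numbers, not anything from the text. The pattern, though, holds for any values: each failed search lowers the probability that the monster exists, and many failed searches lower it a lot.

```python
# Bayesian sketch: how repeated failed searches lower the probability
# that a monster lives in the loch. All numbers are invented for illustration.

def update_on_no_evidence(prior, p_detect):
    """Posterior P(monster | this search found nothing), by Bayes' theorem.

    Likelihoods assumed:
      P(no evidence | monster)    = 1 - p_detect
      P(no evidence | no monster) = 1
    """
    p_no_evidence = prior * (1 - p_detect) + (1 - prior) * 1.0
    return prior * (1 - p_detect) / p_no_evidence

prior = 0.5      # start agnostic (an assumption, for illustration)
p_detect = 0.3   # assumed chance one thorough search would find a real monster

for search in range(1, 11):
    prior = update_on_no_evidence(prior, p_detect)
    print(f"after search {search:2d}: P(monster) = {prior:.3f}")
```

With these made-up numbers, ten empty-handed searches drive the probability from 50% down to under 3%. That is the sense in which a long record of failed searches is strong (though not conclusive) inductive evidence of absence.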
One final note on this fallacy: it’s common for people to mislabel certain bad arguments as arguments from ignorance; namely, arguments made by people who obviously don’t know what the heck they’re talking about. People who are confused or ignorant about the subject on which they’re offering an opinion are liable to make bad arguments, but the fact of their ignorance is not enough to label those arguments as instances of the fallacy. We reserve that designation for arguments that take the forms canvassed above: those that rely on ignorance—and not just that of the arguer, but of the audience as well—as a premise to support the conclusion.
Slippery Slope
From Matthew J. Van Cleave's Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195 Creative Commons Attribution 4.0 International License.
The causal slippery slope fallacy is committed when one event is said to lead to some other (usually disastrous) event via a chain of intermediary events. If you have ever seen DIRECTV’s “get rid of cable” commercials, you will know exactly what I’m talking about. (If you don’t know what I’m talking about, you should Google it right now and find out. They’re quite funny.) Here is an example of a causal slippery slope fallacy (it is adapted from one of the DIRECTV commercials):
If you use cable, your cable will probably go on the fritz. If your cable is on the fritz, you will probably get frustrated. When you get frustrated you will probably hit the table. When you hit the table, your young daughter will probably imitate you. When your daughter imitates you, she will probably get thrown out of school. When she gets thrown out of school, she will probably meet undesirables. When she meets undesirables, she will probably marry undesirables. When she marries undesirables, you will probably have a grandson with a dog collar. Therefore, if you use cable, you will probably have a grandson with a dog collar.
This example is silly and absurd, yes. But it illustrates the causal slippery slope fallacy. Slippery slope fallacies are always made up of a series of conjunctions of probabilistic conditional statements that link the first event to the last event. A causal slippery slope fallacy is committed when one assumes that just because each individual conditional statement is probable, the conditional that links the first event to the last event is also probable. Even if we grant that each “link” in the chain is individually probable, it doesn’t follow that the whole chain (or the conditional that links the first event to the last event) is probable. Suppose, for the sake of the argument, we assign probabilities to each “link” or conditional statement, like this. (I have assigned a high conditional probability to the consequent of each conditional. The high probabilities are for the sake of the argument; I don’t actually think these things are as probable as I’ve assumed here.)
- If you use cable, then your cable will probably go on the fritz (.9)
- If your cable is on the fritz, then you will probably get angry (.9)
- If you get angry, then you will probably hit the table (.9)
- If you hit the table, your daughter will probably imitate you (.8)
- If your daughter imitates you, she will probably be kicked out of school (.8)
- If she is kicked out of school, she will probably meet undesirables (.9)
- If she meets undesirables, she will probably marry undesirables (.8)
- If she marries undesirables, you will probably have a grandson with a dog collar (.8)
However, even if we grant that the probability of each link in the chain is high (80–90%), the conclusion doesn’t even reach a probability higher than chance. Recall that in order to figure the probability of a conjunction, we must multiply the probabilities of the conjuncts:
\[(.9) \times (.9) \times (.9) \times (.8) \times (.8) \times (.9) \times (.8) \times (.8) = .27\nonumber\]
That means the probability of the conclusion (i.e., that if you use cable, you will have a grandson with a dog collar) is only 27%, despite the fact that each conditional has a relatively high probability!
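The calculation above is easy to check in a few lines of code. This sketch uses the same made-up probabilities as the list above and, like the calculation in the text, treats each link as depending only on the link before it:

```python
# Probability of each "link" in the slippery-slope chain, in order,
# using the same made-up numbers as in the text.
link_probabilities = [0.9, 0.9, 0.9, 0.8, 0.8, 0.9, 0.8, 0.8]

# The whole chain holds only if every link holds, so (treating each link
# as depending only on the previous one) we multiply them together.
chain_probability = 1.0
for p in link_probabilities:
    chain_probability *= p

print(f"P(cable leads to dog-collared grandson) = {chain_probability:.2f}")
# prints: P(cable leads to dog-collared grandson) = 0.27
```

Each link is 80–90% probable, yet the chain as a whole is only about 27% probable. Add a few more links at these rates and the chain quickly drops toward zero, which is why long causal chains should make you suspicious.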
Texas Sharpshooter
As the story goes, once there was a man in Texas who shot at his barn door with his rifle. When he had unloaded ten rounds, he walked up to the door, found a group of bullet holes that happened to be particularly tightly clustered, and painted a bullseye around them. No one would credit him with being a sharpshooter, would they? After all, he didn’t actually aim at a bullseye and hit it. He drew the bullseye around his shots!
This story relates to a particular way of using evidence to support a conclusion. Normally, we would hope that someone would take in all of the available evidence about a particular subject, weigh its relative credibility, and then come to a conclusion. The Texas Sharpshooter fallacy happens when someone already knows which conclusion they’d like to prove and then selects only the evidence that supports that conclusion. They’ve done the process backwards. The analogy is a little loose, but the idea is that painting the bullseye is like selecting which evidence to take into account. If you only weigh the evidence that supports the conclusion you like (or, in the story, if you only draw the target around the bullet holes that happened to cluster together), then you’re disregarding other evidence for no other reason than that it got in the way of the conclusion you wanted to draw.
The paradigm example of this is when you let your confidence in a particular conclusion change the way you treat evidence. Here’s an example:
I know that Vaccines cause autism, so the multiple review articles concluding that there is no link between the two must have been bought and paid for by big pharma!
Instead of looking at the evidence and letting it determine what conclusion we draw, we’re letting our fixed conclusion determine how we treat the evidence! It’s backwards, just like Texas Sharpshooting!
This fallacy might also be called the fallacy of Cherry-Picking Evidence because you’ve selected only some evidence (you’ve “cherry picked”).
Here’s an example. Say you think vaccines are unsafe. If that’s a belief you’re committed to, you might only pay attention to evidence like anecdotes about apparent vaccine injuries, the medical professionals who make claims about vaccines being dangerous, and the apparent empirical evidence that connects vaccines and illnesses of various kinds. You’d likely ignore, discount, or explain away all of the evidence which seems to show that vaccines aren’t connected with illness or injury in a significant way. This isn’t so much a judgment on this particular belief as it is a description of what might be going on in someone’s mind.
The core problem here is letting your desired conclusion determine which evidence you take into account or how you treat evidence.
Any use of anecdotal evidence—a single one-off story about individuals that is supposed to be evidence for a general claim—is likely to be an instance of this fallacy, since it’s usually easy to find an anecdote to support any claim. Anecdotes aren’t evidence for general claims. At best, they’re illustrations of general points.
Here’s another example of using selective evidence to get the conclusion you want:
From: Knachel, Matthew, "Fundamental Methods of Logic" (2017).
Philosophy Faculty Books. 1. http://dc.uwm.edu/phil_facbooks/1
Creative Commons Attribution 4.0 International License
Quoting out of Context
Another way to obscure or alter the meaning of what someone actually said is to quote them selectively. Remarks taken out of their proper context might convey a different meaning than they did within that context.
Consider a simple example: movie ads. These often feature quotes from film critics, which are intended to convey the impression that the movie was well-liked by them. “Critics call the film ‘unrelenting’, ‘amazing’, and ‘a one-of-a-kind movie experience’”, the ad might say. That sounds like pretty high praise. I think I’d like to see that movie. That is, until I read the actual review from which those quotes were pulled:
I thought I’d seen it all at the movies, but even this jaded reviewer has to admit that this film is something new, a one-of-a-kind movie experience: two straight hours of unrelenting, snooze-inducing mediocrity. I find it amazing that not one single aspect of this movie achieves even the level of “eh, I guess that was OK.”
The words ‘unrelenting’ and ‘amazing’—and the phrase ‘a one-of-a-kind movie experience’—do in fact appear in that review. But situated in their original context, they’re doing something completely different than the movie ad would like us to believe.
Politicians often quote each other out of context to make their opponents look bad. In the 2012 presidential campaign, both sides did it rather memorably. The Romney campaign was trying to paint President Obama as anti-business. In a campaign speech, Obama once said the following:
If you’ve been successful, you didn’t get there on your own. You didn’t get there on your own. I’m always struck by people who think, well, it must be because I was just so smart. There are a lot of smart people out there. It must be because I worked harder than everybody else. Let me tell you something: there are a whole bunch of hardworking people out there. If you’ve got a business, you didn’t build that. Somebody else made that happen.
Yikes! What an insult to all the hard-working small-business owners out there. They didn’t build their own businesses? The Romney campaign made some effective ads, with these remarks playing in the background, and small-business people describing how they struggled to get their firms going. The problem is, that quote above leaves some bits out—specifically, a few sentences before the last two. Here’s the full transcript:
If you’ve been successful, you didn’t get there on your own. You didn’t get there on your own. I’m always struck by people who think, well, it must be because I was just so smart. There are a lot of smart people out there. It must be because I worked harder than everybody else. Let me tell you something: there are a whole bunch of hardworking people out there.
If you were successful, somebody along the line gave you some help. There was a great teacher somewhere in your life. Somebody helped to create this unbelievable American system that we have that allowed you to thrive. Somebody invested in roads and bridges. If you’ve got a business, you didn’t build that. Somebody else made that happen.
Oh. He’s not telling business owners that they didn’t build their own businesses. The word ‘that’ in “you didn’t build that” doesn’t refer to the businesses; it refers to the roads and bridges—the “unbelievable American system” that makes it possible for businesses to thrive. He’s making a case for infrastructure and education investment; he’s not demonizing small-business owners.
The Obama campaign pulled a similar trick on Romney. They were trying to portray Romney as an out-of-touch billionaire, someone who doesn’t know what it’s like to struggle, and someone who made his fortune by buying up companies and firing their employees. During one speech, Romney said: “I like being able to fire people who provide services to me.” Yikes! What a creep. This guy gets off on firing people? What, he just finds joy in making people suffer? Sounds like a moral monster. Until you see the whole speech:
I want individuals to have their own insurance. That means the insurance company will have an incentive to keep you healthy. It also means if you don’t like what they do, you can fire them. I like being able to fire people who provide services to me. You know, if someone doesn’t give me the good service that I need, I want to say I’m going to go get someone else to provide that service to me.
He’s making a case for a particular health insurance policy: self-ownership rather than employer-provided health insurance. The idea seems to be that under such a system, service will improve since people will be empowered to switch companies when they’re dissatisfied—kind of like with cell phones, for example. When he says he likes being able to fire people, he’s talking about being a savvy consumer. I guess he’s not a moral monster after all.
False Cause
Post Hoc Ergo Propter Hoc
(Latin for ‘After this, therefore because of this’)
Just because one thing regularly follows another doesn’t mean it is caused by that other thing. As the saying goes, correlation does not imply causation. The False Cause fallacy happens when someone mistakes correlation for causation.
For example, ice cream sales and new cases of polio (before the vaccine) were apparently very closely correlated. Why? Because ice cream causes polio? Luckily, no! I love ice cream and I’d be heartbroken to find out that it’s the cause of such a tragic disease. It turns out (I’m told) that polio cases showed up more often in warm weather, and of course ice cream sales are very closely correlated with warm weather. Correlation does not equal causation!
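The ice cream and polio pattern, where two variables are correlated only because both respond to a hidden third variable (a “confounder”), is easy to reproduce in a toy simulation. All the numbers below are invented for illustration: temperature drives both ice cream sales and case counts, sales and cases never influence each other, and yet the two come out strongly correlated.

```python
import random

random.seed(0)

# Simulate a year of days. Temperature is the hidden common cause:
# it drives ice cream sales AND (in this toy model) case counts,
# but sales and cases never influence each other. All numbers invented.
temps = [random.uniform(0, 35) for _ in range(365)]        # degrees C
sales = [10 * t + random.gauss(0, 20) for t in temps]      # ice cream sold
cases = [0.5 * t + random.gauss(0, 2) for t in temps]      # new cases

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"correlation(sales, cases) = {correlation(sales, cases):.2f}")
```

The simulated sales and cases are strongly positively correlated even though, by construction, neither has any causal effect on the other. That is exactly the trap the False Cause fallacy falls into.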
From: Knachel, Matthew, "Fundamental Methods of Logic" (2017).
Philosophy Faculty Books. 1. http://dc.uwm.edu/phil_facbooks/1
Creative Commons Attribution 4.0 International License
Here’s another fallacy for which people always use the Latin, usually shortening it to ‘post hoc’. The whole phrase translates to ‘After this, therefore because of this’, which is a pretty good summation of the pattern of reasoning involved. Crudely and schematically, it looks like this:
\[\begin{align*} & \underline{\text{X occurred before Y} \ \ } \\ & \therefore \text{ X caused Y.} \end{align*}\]
This is not a good inductive argument. That one event occurred before another gives you some reason to believe it might be the cause—after all, X can’t cause Y if it happened after Y did—but not nearly enough to conclude that it is the cause. A silly example: I, your humble author, was born on June 19th, 1974; this was just shortly before a momentous historical event, Richard Nixon’s resignation of the Presidency on August 9th later that summer. My birth occurred before Nixon’s resignation; but this is (obviously!) not a reason to think that it caused his resignation.
Though this kind of reasoning is obviously shoddy—a mere temporal relationship clearly does not imply a causal relationship—it is used surprisingly often. In 2012, New York Yankees shortstop Derek Jeter broke his ankle. It just so happened that this event occurred immediately after another event, as Donald Trump pointed out on Twitter: “Derek Jeter broke ankle one day after he sold his apartment in Trump World Tower.” Trump followed up: “Derek Jeter had a great career until 3 days ago when he sold his apartment at Trump World Tower- I told him not to sell- karma?” No, Donald, not karma; just bad luck.
Nowhere is this fallacy more in evidence than in our evaluation of the performance of presidents of the United States. Everything that happens during or immediately after their administrations tends to be pinned on them. But presidents aren’t all-powerful; they don’t cause everything that happens during their presidencies. On July 9th, 2016, a short piece appeared in the Washington Post with the headline “Police are safer under Obama than they have been in decades”. What does a president have to do with the safety of cops? Very little, especially compared to other factors like poverty, crime rates, policing practices, rates of gun ownership, etc., etc., etc. To be fair, the article was aiming to counter the equally fallacious claims that increased violence against police was somehow caused by Obama.

Another example: in October 2015, US News & World Report published an article asking (and purporting to answer) the question, “Which Presidents Have Been Best for the Economy?” It had charts listing GDP growth during each administration since Eisenhower. But while presidents and their policies might have some effect on economic growth, their influence is certainly swamped by other factors.

Similar claims on behalf of state governors are even more absurd. At the 2016 Republican National Convention, Governors Scott Walker and Mike Pence—of Wisconsin and Indiana, respectively—both pointed to record-high employment in their states as vindication of their conservative, Republican policies. But some other states were also experiencing record-high employment at the time: California, Minnesota, New Hampshire, New York, Washington. Yes, they were all controlled by Democrats. Maybe there’s a separate cause for those strong jobs numbers in differently governed states? Possibly it has something to do with the improving economy and overall health of the job market in the whole country?
Hasty Generalization
A hasty generalization is just that: it happens when one generalizes about a group of people or things or events, but does so too quickly, without enough evidence or with too small a sample.
If I’m at the grocery store and grab two avocados that happen to be rotten, it would be rash of me to exclaim “What’s up with this place!?!? All of the avocados in this store are rotten!” You need a randomized (so as to be hopefully representative) sample of avocados before you make a generalization about all of the avocados in the store.
Similarly with other generalizations you make. If you’re jumping to conclusions about a whole group of things on the basis of interacting with or observing only a few of those things, there’s a good chance that you’re being hasty in your generalizing.
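A quick simulation shows why sample size matters for the avocado example. Assume (an invented figure) that 10% of the store’s avocados are actually rotten. The sketch below estimates how often a random sample gives an all-or-nothing verdict, i.e. makes it look like none of the avocados are rotten, or all of them are:

```python
import random

random.seed(1)

TRUE_ROTTEN_RATE = 0.10  # assumed for illustration: 10% of avocados are rotten

def sample_rotten_fraction(n):
    """Grab n random avocados; return the fraction that turn out rotten."""
    return sum(random.random() < TRUE_ROTTEN_RATE for _ in range(n)) / n

def extreme_rate(n, trials=10_000):
    """How often a sample of size n says ALL avocados are rotten, or NONE are."""
    extremes = sum(sample_rotten_fraction(n) in (0.0, 1.0) for _ in range(trials))
    return extremes / trials

for n in (2, 10, 50):
    print(f"sample size {n:2d}: all-or-nothing verdict {extreme_rate(n):.1%} of the time")
```

With samples of two, you get a completely misleading all-or-nothing verdict over 80% of the time; with samples of fifty, almost never. That shrinking error rate as samples grow is the statistical core of why hasty generalizations fail.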
From: Knachel, Matthew, "Fundamental Methods of Logic" (2017).
Philosophy Faculty Books. 1. http://dc.uwm.edu/phil_facbooks/1
Creative Commons Attribution 4.0 International License
Many inductive arguments involve an inference from particular premises to a general conclusion; this is generalization. For example, if you make a bunch of observations every morning that the sun rises in the east, and conclude on that basis that, in general, the sun always rises in the east, this is a generalization. And it’s a good one! With all those particular sunrise observations as premises, your conclusion that the sun always rises in the east has a lot of support; that’s a strong inductive argument.
One commits the hasty generalization fallacy when one makes this kind of inference based on an insufficient number of particular premises, when one is too quick—hasty—in inferring the general conclusion.
People who deny that global warming is a genuine phenomenon often commit this fallacy. In February of 2015, the weather was unusually cold in Washington, DC. Senator James Inhofe of Oklahoma famously took to the Senate floor wielding a snowball. “In case we have forgotten, because we keep hearing that 2014 has been the warmest year on record, I ask the chair, ‘You know what this is?’ It’s a snowball, from outside here. So it’s very, very cold out. Very unseasonable.” He then tossed the snowball at his colleague, Senator Bill Cassidy of Louisiana, who was presiding over the debate, saying, “Catch this.”
Senator Inhofe commits the hasty generalization fallacy. He’s trying to establish a general conclusion—that 2014 wasn’t the warmest year on record, or that global warming isn’t really happening (he’s on the record that he considers it a “hoax”). But the evidence he presents is insufficient to support such a claim. His evidence is an unseasonable coldness in a single place on the planet, on a single day. We can’t derive from that any conclusions about what’s happening, temperature-wise, on the entire planet, over a long period of time. That the earth is warming is not a claim that everywhere, at every time, it will always be warmer than it was; the claim is that, on average, across the globe, temperatures are rising. This is compatible with a couple of cold snaps in the nation’s capital.
Many people are susceptible to hasty generalizations in their everyday lives. When we rely on anecdotal evidence to make decisions, we commit the fallacy. Suppose you’re thinking of buying a new car, and you’re considering a Subaru. Your neighbor has a Subaru. So what do you do? You ask your neighbor how he likes his Subaru. He tells you it runs great, hasn’t given him any trouble. You then, fallaciously, conclude that Subarus must be terrific cars. But one person’s testimony isn’t enough to justify that conclusion; you’d need to look at many, many more drivers’ experiences to reach such a conclusion (this is why the magazine Consumer Reports is so useful).
A particularly pernicious instantiation of the Hasty Generalization fallacy is the development of negative stereotypes. People often make general claims about religious or racial groups, ethnicities and nationalities, based on very little experience with them. If you once got mugged by a Puerto Rican, that’s not a good reason to think that, in general, Puerto Ricans are crooks. If a waiter at a restaurant in Paris was snooty, that’s no reason to think that French people are stuck up. And yet we see this sort of faulty reasoning all the time.