9.4: Analogical Reasoning
It may be the most common type of inductive reasoning. It’s sort of built into our cognitive systems. What is it? Reasoning by analogy. We use the known or observed similarities of things to posit further similarities. In this edition of the textbook, we’ll cover this quite briefly. In future editions, I hope to expand this section to a more complete analysis.
The first thing to do is to look at some examples of analogical reasoning or reasoning by analogy:
If you increase the thickness of a copper wire (that is, decrease its gauge number), you increase the maximum amperage the wire can conduct. Therefore, since water and electricity share many commonalities, particularly with respect to their flow, we should see an increase in flow rate with an increase in pipe diameter.
Electrons and photons are both point particles, so we should expect electrons to travel at or near the speed of light.
My boyfriend is acting just like that character on television who is cheating on their partner, so my boyfriend is probably cheating.
Samsung produces both the Galaxy S8 and the Galaxy S9, so I would expect their basic operating systems to be quite similar.
Northern California and Tuscany have very similar climates, so olives and grapes will probably grow well in Northern California.
One can see how this isn’t perfect reasoning, right? There are going to be loads of counterexamples to each of these types of analogical reasoning. But some of these particular examples turn out to have true conclusions, and they seem to have identified a commonality between the two analogues (the things being compared in the analogy) that in fact makes the conclusion true. For instance, it is the similar climates of Northern California and Tuscany that make for quite similar growing conditions and therefore the success of quite similar crops. It is the properties shared by point particles that lead them to travel at or near the speed of light (though it’s not impossible I’m wrong about this; I’m no physicist). It is the common manufacturer that makes Samsung phones similar. In other words: the commonalities between things often do cause them to have further commonalities.
Arguments by analogy have a certain structure in common. It’s worth investigating this a bit further:
| Analogues | |
|---|---|
| Attribute(s) in Common | |
| Attribute(s) inferred | |
| Link between attributes | |
Let’s take an example from above and analyze it according to this schema:
If you increase the thickness of a copper wire (that is, decrease its gauge number), you increase the maximum amperage the wire can conduct. Therefore, since water and electricity share many commonalities, particularly with respect to their flow, we should see an increase in flow rate with an increase in pipe diameter.
Now we identify the analogues first:
| Analogues | Electricity flow vs. Water flow |
|---|---|
It can be tricky to get the analogues just right, but the Attribute in Common can help us narrow it down.
| Attribute(s) in Common | Characteristics of flow like amperage/flow rate; voltage/pressure; and resistance/friction |
|---|---|
Next we identify what is being inferred about the pair of analogues: what else are they supposed to have in common on the basis of their known similarities?
| Attribute(s) inferred | An increase in the diameter of the conduit brings an increase in amperage/flow rate |
|---|---|
Finally, we might wonder what it is about the Attributes in Common that leads us to believe the analogues will share the further attribute(s). What is the causal link? Here’s my best guess about what’s going on here (again, I’m no physicist):
| Link between attributes | The flow of electrons in fact shares many properties with the flow of water molecules: both pipes and copper wires allow only so much of the flowing medium (electrons or water molecules) to pass through a given cross-section at a given time. Only so many electrons and only so many water molecules can fit through a wire or hose of a given diameter. |
|---|---|
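For the curious, the standard textbook formulas behind that link can be sketched roughly as follows. This is only an illustration of the physical analogy, and a simplified one at that; nothing in the analysis of the argument depends on these details:

\[ I = \frac{V}{R}, \qquad R = \frac{\rho L}{A} \quad\Longrightarrow\quad I = \frac{V A}{\rho L} \qquad \text{(current in a wire)} \]

\[ Q = \frac{\pi r^{4}\,\Delta P}{8 \mu L} \qquad \text{(laminar water flow in a pipe)} \]

Here \(V\) is the voltage and \(\Delta P\) the pressure difference (the “push”), \(L\) is the length of the wire or pipe, \(A\) and \(r\) are the wire’s cross-sectional area and the pipe’s radius, \(\rho\) is the resistivity of the copper, and \(\mu\) is the viscosity of the water. Both relations say the same kind of thing: widen the conduit and, for the same push, more flows through per unit time. That shared structure is what the analogy trades on, even though the two formulas are not identical (the pipe’s radius enters to the fourth power, for instance).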
This is a tricky example, since the right answer relies on physics I am not an expert in (not by a long stretch). What matters at this point is that we’re able to identify good and bad analogies by means of these tools of analysis. We shouldn’t expect to find a link between attributes in cases of bad analogical reasoning. Here’s an example:
Both of my pairs of shoes are blue, so I’d expect one [Stiletto heels] to perform just as well on a hike as the other [Chaco Sandals].
Let’s break down this analogy using our handy chart:
| Analogues | Stiletto Heels vs. Chaco Sandals |
|---|---|
| Attribute(s) in Common | Both are blue; both are footwear |
| Attribute(s) inferred | Performance on a hike |
| Link between attributes | They are at least both footwear, so they will perform better than most non-footwear (better than a slice of bread, for instance), but that’s it. |
There is no link between color and function when it comes to shoes and so the analogy is a bad one. Even if the link is unknown or a bit mysterious (again, I’m no physicist, so the chart about electrons is probably a bit off), we can usually still identify something like the right link between attributes if we understand anything about how the analogy works. In the case of a bad analogy, we can make up wild stories, but at the end of the day we’ll most likely realize that there simply is no link between the attributes in question.
What about when the attribute in common isn’t connected to the attribute inferred? Well, it’s probably not a good bit of analogical reasoning, even if it ends up having a true conclusion. Here’s an example of this sort of thing:
Rolexes and Teslas are both expensive. Teslas are very durable, so it follows that Rolexes are very durable.
Let’s break it down:
| Analogues | Rolex watches vs. Tesla cars |
|---|---|
| Attribute(s) in Common | They are expensive consumer goods |
| Attribute(s) inferred | Durability |
| Link between attributes | When things are expensive, the manufacturer can spend more money on durable materials and the like, but we all know there are durable cheap goods and fragile expensive goods, so price point alone is not enough to infer durability. |
Sure, they are both durable, but they are not durable because they are expensive, nor vice versa. They are expensive because of market forces like supply and demand, and they are durable because they are well made. The two attributes don’t really share much of a link, even if there is some sort of weak connection between them.
In closing, analogical reasoning starts from some commonalities between two or more sets of things (analogues), and then it proceeds by inferring further commonalities. It is successful when either a) the common attributes cause or explain the inferred attributes; or b) the common and inferred attributes are both caused or explained by something else about the two analogues. It is unsuccessful when neither (a) nor (b) is true. Using the chart on offer in this section can be helpful in breaking down analogical reasoning and identifying just what is being claimed. Sometimes analyzing it like this can lay bare the faults in such forms of reasoning.
The following is a good discussion of the evaluation of analogical arguments from Matthew Knachel’s Fundamental Methods of Logic (CC BY 4.0 license):
The Evaluation of Analogical Arguments
Unlike in the case of deduction, we will not have to learn special techniques to use when evaluating these sorts of arguments. It’s something we already know how to do, something we typically do automatically and unreflectively. The purpose of this section, then, is not to learn a new skill, but rather to subject a practice we already know how to engage in to critical scrutiny. We evaluate analogical arguments all the time without thinking about how we do it. We want to achieve a metacognitive perspective on the practice of evaluating arguments from analogy; we want to think about a type of thinking that we typically engage in without much conscious deliberation. We want to identify the criteria that we rely on to evaluate analogical reasoning—criteria that we apply without necessarily realizing that we’re applying them. Achieving such metacognitive awareness is useful insofar as it makes us more self-aware, critical, and therefore effective reasoners.
Analogical arguments are inductive arguments. They give us reasons that are supposed to make their conclusions more probable. How probable, exactly? That’s very hard to say. How probable was it that I would like The Wolf of Wall Street given that I had liked the other four Scorsese/DiCaprio collaborations? I don’t know. How probable is it that it’s wrong to eat pork given that it’s wrong to eat dogs and dolphins? I really don’t know. It’s hard to imagine how you would even begin to answer that question.
As we mentioned, while it’s often impossible to evaluate inductive arguments by giving a precise probability of their conclusions, it is possible to make relative judgments about strength and weakness. Recall that new information can change the probability of the conclusion of an inductive argument. We can make relative judgments like this: if we add this new information as a premise, the new argument is stronger/weaker than the old argument; that is, the new information makes the conclusion more/less likely.
It is these types of relative judgments that we make when we evaluate analogical reasoning. We compare different arguments—with the difference being new information in the form of an added premise, or a different conclusion supported by the same premises—and judge one to be stronger or weaker than the other. Subjecting this practice to critical scrutiny, we can identify six criteria that we use to make such judgments.
We’re going to be making relative judgments, so we need a baseline argument against which to compare others. Here is such an argument:
Alice has taken four Philosophy courses during her time in college. She got an A in all four. She has signed up to take another Philosophy course this semester. I predict she will get an A in that course, too.
This is a simple argument from analogy, in which the future is predicted based on past experience. It fits the schema for analogical arguments: the new course she has signed up for is designated by ‘c’; the property we’re predicting it has (Q) is that it is a course Alice will get an A in; the analogues are the four previous courses she’s taken; what they have in common with the new course (P1) is that they are also Philosophy classes; and they all have the property Q—Alice got an A in each.
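The schema Knachel has in mind is introduced earlier in his book; reconstructed roughly, it looks like this (where \(a_1, \ldots, a_n\) are the analogues, \(c\) is the new item, the \(P\)'s are the properties they are known to share, and \(Q\) is the property being inferred):

\[
\begin{array}{l}
a_1, a_2, \ldots, a_n, \text{ and } c \text{ all have properties } P_1, P_2, \ldots, P_k.\\[2pt]
a_1, a_2, \ldots, a_n \text{ all have property } Q.\\[2pt]
\therefore\ c \text{ probably has property } Q \text{ as well.}
\end{array}
\]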
Anyway, how strong is the baseline argument? How probable is its conclusion in light of its premises? I have no idea. It doesn’t matter. We’re now going to consider tweaks to the argument, and the effect that those will have on the probability of the conclusion. That is, we’re going to consider slightly different arguments, with new information added to the original premises or changes to the prediction based on them, and ask whether these altered new arguments are stronger or weaker than the baseline argument. This will reveal the six criteria that we use to make such judgments. We’ll consider one criterion at a time.
Number of Analogues
Suppose we alter the original argument by changing the number of prior Philosophy courses Alice had taken. Instead of Alice having taken four Philosophy courses before, we’ll now suppose she has taken 14. We’ll keep everything else about the argument the same: she got an A in all of them, and we’re predicting she’ll get an A in the new one. Are we more or less confident in the conclusion—the prediction of an A—with the altered premise? Is this new argument stronger or weaker than the baseline argument?
It’s stronger! We’ve got Alice getting an A 14 times in a row instead of only four. That clearly makes the conclusion more probable. (How much more? Again, it doesn’t matter.)
What we did in this case is add more analogues. This reveals a general rule: other things being equal, the more analogues in an analogical argument, the stronger the argument (and conversely, the fewer analogues, the weaker). The number of analogues is one of the criteria we use to evaluate arguments from analogy.
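If you want a rough feel for why sheer numbers help, one very idealized model (Laplace’s rule of succession, which treats each course as an independent trial with an unknown but fixed success rate) gives a back-of-the-envelope estimate; the assumptions clearly don’t hold for real coursework, and nothing in what follows depends on the figures:

\[
P(\text{A in the next course} \mid s \text{ A's in } n \text{ courses}) = \frac{s+1}{n+2},
\qquad
\frac{4+1}{4+2} \approx 0.83
\quad\text{versus}\quad
\frac{14+1}{14+2} \approx 0.94 .
\]

The takeaway is only the direction of the change: more analogues, other things being equal, push the conclusion’s probability up.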
Variety of Analogues
You’ll notice that the original argument doesn’t give us much information about the four courses Alice succeeded in previously and the new course she’s about to take. All we know is that they’re all Philosophy courses. Suppose we tweak things. We’re still in the dark about the new course Alice is about to take, but we know a bit more about the other four: one was a course in Ancient Greek Philosophy; one was a course on Contemporary Ethical Theories; one was a course in Formal Logic; and the last one was a course in the Philosophy of Mind. Given this new information, are we more or less confident that she will succeed in the new course, whose topic is unknown to us? Is the argument stronger or weaker than the baseline argument?
It is stronger. We don’t know what kind of Philosophy course Alice is about to take, but this new information gives us an indication that it doesn’t really matter. She was able to succeed in a wide variety of courses, from Mind to Logic, from Ancient Greek to Contemporary Ethics. This is evidence that Alice is good at Philosophy generally, so that no matter what kind of course she’s about to take, she’ll probably do well in it.
Again, this points to a general principle about how we evaluate analogical arguments: other things being equal, the more variety there is among the analogues, the stronger the argument (and conversely, the less variety, the weaker).
Number of Similarities
In the baseline argument, the only thing the four previous courses and the new course have in common is that they’re Philosophy classes. Suppose we change that. Our newly tweaked argument predicts that Alice will get an A in the new course, which, like the four she succeeded in before, is cross-listed in the Department of Religious Studies and covers topics in the Philosophy of Religion. Given this new information—that the new course and the four older courses were similar in ways we weren’t aware of—are we more or less confident in the prediction that Alice will get another A? Is the argument stronger or weaker than the baseline argument?
Again, it is stronger. Unlike the last example, this tweak gives us new information both about the four previous courses and the new one. The upshot of that information is that they’re more similar than we knew; that is, they have more properties in common. To P1 = ‘is a Philosophy course’ we can add P2 = ‘is cross-listed with Religious Studies’ and P3 = ‘covers topics in Philosophy of Religion’. The more properties things have in common, the stronger the analogy between them. The stronger the analogy, the stronger the argument based on that analogy. We now know that Alice did well not just in Philosophy classes, but specifically in classes covering the Philosophy of Religion; and we know that the new class she’s taking is also a Philosophy of Religion class. I’m much more confident predicting she’ll do well again than I was when all I knew was that all the classes were Philosophy; the new one could’ve been in a different topic that she wouldn’t have liked.
General principle: other things being equal, the more properties involved in the analogy—the more similarities between the item in the conclusion and the analogues—the stronger the argument (and conversely, the fewer properties, the weaker).
Number of Differences
An argument from analogy is built on the foundation of the similarities between the analogues and the item in the conclusion—the analogy. Anything that weakens that foundation weakens the argument. So, to the extent that there are differences among those items, the argument is weaker.
Suppose we add new information to our baseline argument: the four Philosophy courses Alice did well in before were all courses in the Philosophy of Mind; the new course is about the history of Ancient Greek Philosophy. Given this new information, are we more or less confident that she will succeed in the new course? Is the argument stronger or weaker than the baseline argument? Clearly, the argument is weaker. The new course is on a completely different topic than the other ones. She did well in four straight Philosophy of Mind courses, but Ancient Greek Philosophy is quite different. I’m less confident that she’ll get an A than I was before.
If I add more differences, the argument gets even weaker. Suppose the four Philosophy of Mind courses were all taught by the same professor (the person in the department whose expertise is in that area), while the Ancient Greek Philosophy course is taught by someone different (the department’s specialist in that topic). Different subject matter, different teachers: I’m even less optimistic about Alice’s continued success.
Generally speaking, other things being equal, the more differences there are between the analogues and the item in the conclusion, the weaker the argument from analogy.
Relevance of Similarities and Differences
Not all similarities and differences are capable of strengthening or weakening an argument from analogy, however. Suppose we tweak the original argument by adding the new information that the new course and the four previous courses all have their weekly meetings in the same campus building. This is an additional property that the courses have in common, which, as we just saw, other things being equal, should strengthen the argument. But other things are not equal in this case. That’s because it’s very hard to imagine how the location of the classroom would have anything to do with the prediction we’re making—that Alice will get an A in the course. Classroom location is simply not relevant to success in a course.[1] Therefore, this new information does not strengthen the argument. Nor does it weaken it; I’m not inclined to doubt Alice will do well in light of the information about location. It simply has no effect at all on my appraisal of her chances.
Similarly, if we tweak the original argument to add a difference between the new class and the other four, to the effect that all four of the older classes met in the same building while the new one meets in a different building, there is no effect on our confidence in the conclusion. Again, the building in which a class meets is simply not relevant to how well someone does.
Contrast these cases with the new information that the new course and the previous four are all taught by the same professor. Now that strengthens the argument! Alice has gotten an A four times in a row from this professor—all the more reason to expect she’ll receive another one. This tidbit strengthens the argument because the new similarity—the same person teaches all the courses—is relevant to the prediction we’re making—that Alice will do well. Who teaches a class can make a difference to how students do—either because they’re easy graders, or because they’re great teachers, or because the student and the teacher are in tune with one another, etc. Even a difference between the analogues and the item in the conclusion, with the right kind of relevance, can strengthen an argument. Suppose the other four Philosophy classes were taught by the same teacher, but the new one is taught by a TA—who just happens to be her boyfriend. That’s a difference, but one that makes the conclusion—that Alice will do well—more probable.
Generally speaking, careful attention must be paid to the relevance of any similarities and differences to the property in the conclusion; the effect on strength varies.
Modesty/Ambition of the Conclusion
Suppose we leave everything about the premises in the original baseline argument the same: four Philosophy classes, an A in each, new Philosophy class. Instead of adding to that part of the argument, we’ll tweak the conclusion. Instead of predicting that Alice will get an A in the class, we’ll predict that she’ll pass the course. Are we more or less confident that this prediction will come true? Is the new, tweaked argument stronger or weaker than the baseline argument?
It’s stronger. We are more confident in the prediction that Alice will pass than we are in the prediction that she will get another A, for the simple reason that it’s much easier to pass than it is to get an A. That is, the prediction of passing is a much more modest prediction than the prediction of an A.
Suppose we tweak the conclusion in the opposite direction—not more modest, but more ambitious. Alice has gotten an A in four straight Philosophy classes, she’s about to take another one, and I predict that she will do so well that her professor will suggest that she publish her term paper in one of the most prestigious philosophical journals and that she will be offered a three-year research fellowship at the Institute for Advanced Study in Princeton, New Jersey. That’s a bold prediction! Meaning, of course, that it’s very unlikely to happen. Getting an A is one thing; getting an invitation to be a visiting scholar at one of the most prestigious academic institutions in the world is quite another. The argument with this ambitious conclusion is weaker than the baseline argument.
General principle: the more modest the argument’s conclusion, the stronger the argument; the more ambitious, the weaker.
Refutation by Analogy
We can use arguments from analogy for a specific logical task: refuting someone else’s argument, showing that it’s bad. Recall the case of deductive arguments. To refute those—to show that they are bad, i.e., invalid—we had to produce a counterexample—a new argument with the same logical form as the original that was obviously invalid, in that its premises were in fact true and its conclusion in fact false. We can use a similar procedure to refute inductive arguments. Of course, the standard of evaluation is different for induction: we don’t judge them according to the black and white standard of validity. And as a result, our judgments have less to do with form than with content. Nevertheless, refutation along similar lines is possible, and analogies are the key to the technique.
To refute an inductive argument, we produce a new argument that’s obviously bad—just as we did in the case of deduction. We don’t have a precise notion of logical form for inductive arguments, so we can’t demand that the refuting argument have the same form as the original; rather, we want the new argument to have an analogous form to the original. The stronger the analogy between the refuting and refuted arguments, the more decisive the refutation. We cannot produce the kind of knock-down refutations that were possible in the case of deductive arguments, where the standard of evaluation—validity—does not admit of degrees of goodness or badness, but the technique can be quite effective.
Consider the following:
“Duck Dynasty” star and Duck Commander CEO Willie Robertson said he supports Trump because both of them have been successful businessmen and stars of reality TV shows.
By that logic, does that mean Hugh Hefner’s success with “Playboy” and his occasional appearances on “Bad Girls Club” warrant him as a worthy president? Actually, I’d still be more likely to vote for Hefner than Trump.[2]
The author is refuting the argument of Willie Robertson, the “Duck Dynasty” star. Robertson’s argument is something like this: Trump is a successful businessman and reality TV star; therefore, he would be a good president. To refute this, the author produces an analogous argument—Hugh Hefner is a successful businessman and reality TV star; therefore, Hugh Hefner would make a good president—that he regards as obviously bad. What makes it obviously bad is that it has a conclusion that nobody would agree with: Hugh Hefner would make a good president. That’s how these refutations work. They attempt to demonstrate that the original argument is lousy by showing that you can use the same or very similar reasoning to arrive at an absurd conclusion.
Here’s another example, from a group called “Iowans for Public Education”. Next to a picture of an apparently well-to-do lady is the following text:
“My husband and I have decided the local parks just aren’t good enough for our kids. We’d rather use the country club, and we are hoping state tax dollars will pay for it. We are advocating for Park Savings Accounts, or PSAs. We promise to no longer use the local parks. To hell with anyone else or the community as a whole. We want our tax dollars to be used to make the best choice for our family.”
Sound ridiculous? Tell your legislator to vote NO on Education Savings Accounts (ESAs), aka school vouchers.
The argument that Iowans for Public Education put in the mouth of the lady on the poster is meant to refute reasoning used by advocates for “school choice”, who say that they ought to have the right to opt out of public education, keep the tax dollars they would otherwise pay for public schools, and use them to send their kids to private schools. A similar line of reasoning sounds pretty crazy when you replace public schools with public parks and private schools with country clubs.
Since these sorts of refutations rely on analogies, they are only as strong as the analogy between the refuting and refuted arguments. There is room for dispute on that question. Advocates for school vouchers might point out that schools and parks are completely different things, that schools are much more important to the future prospects of children, and that given the importance of education, families should have the right to choose what they think is best. Or something like that. The point is, the kinds of knock-down refutations that were possible for deductive arguments are not possible for inductive arguments. There is always room for further debate.
[1] I’m sure someone could come up with some elaborate backstory for Alice according to which the location of the class somehow makes it more likely that she will do well, but set that aside. No such story is on the table here.
[2] Austin Faulds, “Weird celebrity endorsements fit for weird election,” Indiana Daily Student, 10/12/16, http://www.idsnews.com/article/2016/...weird-election.