6.4: Creating Counterarguments
A counterargument to argument S is always an argument for why the conclusion of S is not true.
a. true b. false
When it comes to issues that cannot be settled by scientific investigation, the proofs or arguments take on a different character from the sort we’ve been examining. The reasons for you to favor a social policy, for example, will more often appeal to the values you hold than to the facts about the world. Values cannot be detected with voltmeters, nor measured with meter sticks. The rest of this chapter will consider arguments that appeal to the values held by the participants in the argument.
Sometimes when we respond to an argument that attacks our own position, we try to expose its weaknesses; for example, we might point out that the reasons given are not true, or that the reasons don't make the case even if they are true, and so forth. At other times we don't directly attack the argument but rather create a new argument for the opposite conclusion. This indirect response is called "creating a counterargument." By successfully using the techniques of argument criticism or of creating a counterargument, our position lives on to bury its would-be undertaker. In courts of law, these techniques are called rebuttal. The point of rebuttal is to turn our vaguely felt objections into convincing arguments.
What follows in the next few pages is an example of the give-and-take in argumentation. The exchange contains arguments, counterarguments, criticisms of arguments, criticisms of criticisms, and revisions of criticisms. The issue is whether utilitarianism is the proper way to decide right from wrong. Utilitarianism is the ethical viewpoint based on the following claim:
UTILITARIAN CLAIM: Among the possible actions a person can take in a situation, the right action is that which produces the most overall happiness for everyone.
John Stuart Mill, a nineteenth-century English philosopher, argued for utilitarianism. He wanted an antidote to the practice of English judges and legislators making social and ethical decisions on the basis of how the decision "felt" to them. Mill wanted the decision method to be more scientific. He hoped that in principle the decision could be the result of a process of calculation of the positive and negative consequences for each possible choice. The alternative that produced the maximum number would be the one to choose. His utilitarianism is a kind of cost-benefit analysis that focuses not on the benefits to a company or special group but to society as a whole, to all people.
ARGUMENT FOR UTILITARIANISM: We all have or should have feelings of generalized benevolence, of caring in general for our fellow human beings. Utilitarianism expresses these feelings. In addition, almost all of the actions that utilitarianism says are immoral are in fact believed to be immoral by most people; and most of the actions that utilitarianism says are moral are generally believed to be moral, so utilitarianism coincides with most of what we already believe. In fact, utilitarianism agrees so well that it provides the most coherence to our chaotic ethical beliefs. Because we want this coherence, we should adopt utilitarianism and accept the consequences of adopting it—namely, that if utilitarian reasoning declares some action to be immoral, then even though we intuitively believe the action to be moral, we must revise our intuitions to be in agreement with what utilitarianism says.
The American philosopher William James did not accept this line of reasoning. He argued against utilitarianism. James's objection arose from his belief that it would be immoral to demand that a single individual suffer even if that suffering were to promote the overall happiness of everyone else. He based his reason on the immediate moral feeling of revulsion we'd have if we thought about the special situation he had in mind. He said:
COUNTERARGUMENT: Utilitarianism implies that trading off someone's pain to achieve the greater good of everyone else is acceptable. Yet it is really unacceptable, because of our moral feelings of revulsion at running roughshod over the dignity of that one individual, the "lost soul on the far-off edge of things," as James calls him.
Here is a simpler version of James's counterargument. Imagine yourself on a wagon train of settlers moving westward in the U.S. in 1850. You are attacked by a gang of outlaws. Circling your wagons, you prepare to defend yourself against an overwhelming force. You count your few guns and bullets and realize you are in a desperate situation. Just then the outlaw leader makes an offer. He promises to let the wagon train pass through to Oregon provided you will hand over the daughter of your wagon master. Otherwise, he says, his gang will attack and kill you all. You happen to know that this gang of outlaws has a tradition of keeping its promises. So if they get the daughter, the rest of you will likely make it through unharmed. You also know that the daughter will likely face unspeakable horrors. What do you do? The utilitarian will say to give her up. She has just one life, but the rest of the wagon train has many more lives to be saved; it's a matter of cost-benefit analysis. In this imaginary scenario, William James would argue that trading the girl for the greater good of the wagon train would be morally abhorrent. It wouldn't be right to do this to her, regardless of the consequences to the wagon train. Therefore, utilitarianism leads to immorality, and it cannot be the proper basis of moral reasoning. End of counterargument.
When faced with an argument against one's own position, a person often strikes back, getting defensive. The utilitarian John Stuart Mill might have responded with, "Well, it's easy for you to criticize; you don't have to face the consequences of these important decisions on a day-to-day basis." This remark is the kind of thing we say when we want to discount the force of someone else's argument. Empress Catherine the Great used the same tactic against eighteenth-century Enlightenment philosophers who were criticizing her social policies toward the Russian peasants; she said, "It's easy for you to talk. You write on paper, but I write on human flesh."
In the following passage, the defender doesn't get defensive; instead the utilitarian offers a more substantial criticism that tries to undermine the main points made in James's counterargument. This new argument is based on two reasons: (1) there is no need for us to pay much attention to James's feelings of revulsion, and (2) we do make trade-offs of people's lives all the time without calling it immoral.
CRITICISM OF THE COUNTERARGUMENT: Moral feelings are very strong, but this does not prevent them from appearing as irrational taboos to those who do not share our conventions. This should warn us against the tendency to make ethical philosophy an apology or justification of the conventional customs that happen to be established. Suppose that someone were to offer our country a wonderfully convenient mechanism, but demand in return the privilege of killing thousands of our people every year. How many of those who would indignantly refuse such a request will condemn the use of the automobile now that it is already with us? [172]
The point of the criticism is to say that if you accept the trade-off for the automobile, then you should accept the trade-off of a lonely soul's pain for the greater good of everyone else. If so, then utilitarianism has not been shown to lead to absurdity, as James mistakenly supposed. Therefore, the counterargument fails, and the argument for utilitarianism stands.
Let's review the flow of the discussion so far. The issue is the truth or falsity of utilitarianism. James's position is that utilitarianism is incorrect. His counterargument depends on his evaluation of the example of the lost soul on the far-off edge of things. The criticism of James's counterargument goes like this. James's situation with the lost soul is analogous to the situation of people being killed by the automobile, and just as it would have been okay to proceed with automobile construction, so it would be okay to send that lost soul to the far-off edge of things. So, utilitarianism is correct.
At this point, James could counterattack by criticizing the analogy between torturing individuals and killing them in car accidents. He might say this:
CRITICISM OF THE CRITICISM OF THE COUNTERARGUMENT: The analogy between the torture and the automobile situation isn't any good. The two situations differ in a crucial respect. My situation with the far-off lost soul requires intentional harm. The lost soul did not voluntarily give up his right not to be tortured; if he did, that would have been praiseworthy on his part, but he didn't. Instead he was seized against his will. But automobiles were introduced into our society voluntarily with no intention to harm anybody. The car manufacturers didn't build cars with the goal of "killing thousands of our people every year." They set out to make money and provide society with efficient transportation. In my situation, one person is singled out for harm. In the automobile situation, each person runs an approximately equal risk of accidental harm. The lost soul is not free to refuse, and his pain is foreseen. But any particular automobile driver is free not to drive, and his pain is unpredictable. So the analogy breaks down. My point stands.
The person who originally criticized the counterargument now makes a change in light of the criticism of his criticism.
REVISION OF THE CRITICISM OF THE COUNTERARGUMENT: Maybe the analogy with automobiles does break down, but the motivation behind it is still correct and can show what is wrong with James's counterargument against utilitarianism. We often consider it moral to trade off some people's pain against the greater good of everyone else even when the pain is intentionally inflicted and even when those who receive it are not free to refuse it. The U.S. Immigration and Naturalization Service requires people coming into this country to suffer the pain of certain vaccinations for the good of the rest of us who don't want to catch foreign diseases. There is no universal sense of revulsion about such situations. Most people think it is the right thing to do, being the lesser of two evils. It is understandable why individuals look out for themselves and don't choose to do what is in the interest of all society, but that doesn't make what they do morally right, does it? So, utilitarianism is the proper viewpoint on ethics after all.
Well, we won't crown a victor in this dispute about utilitarianism. There are many more moves and countermoves that might occur before the issue is settled. The issue is still an open one in the field of philosophy. However, the discussion does demonstrate the give-and-take that occurs in a serious dispute.
Briefly state a counterargument to the following argument:
Communism is better than capitalism because communism is designed to promote cooperation among equals whereas capitalism is designed to promote competition, greed, and the domination of one person by another.
Let’s now try to handle all at once many of the points made about argumentation.
Consider the following debate, which contains a series of arguments, criticisms of arguments, counterarguments, revisions, clarifications, arguments in defense of previous assumptions, and so forth. The main issue is whether robots could someday think. (a) Where does the first clarification occur? (b) Where does the first criticism occur? (c) Where is the first counterargument? (d) Which side should win this debate? (e) What is the most convincing point made for the winning side?
A.
First Person: A robot could never think. You can tell that right away, just by thinking about what the words mean.
Second Person: Are you suggesting that a robot cannot think because thinking and robot are conflicting terms the way round and triangular are?
B.
First Person: No, I mean even if some future robot appeared to think, the real thinking would belong to its programmer. A robot couldn't think on its own.
Second Person: When the robot walks, you don't say it's really the programmer who is doing the walking, do you?
C.
First Person: No, of course not.
Second Person: Robots can think because they do all sorts of things that we used to say required thinking. They play chess, for example, though not the way we do.
D.
First Person: They play chess, but they don't think when they play it. Robots cannot think on their own because robots can only do what they are programmed to do.
Second Person: OK, program it to think.
E.
First Person: But you can't do that.
Second Person: Why not? I hope you don't answer by saying, "because robots can only do what they are programmed to do." That would be circular reasoning.
F.
First Person: Thinking requires flexibility in the sense that one can change one's thoughts. But since robots can't change the thoughts that are programmed into them, they cannot really think either.
Second Person: A robot could change its thought without changing its programming, just as a chess-playing computer changes its moves without changing its programming.
G.
First Person: My point about change is that a thinking being must be capable of original thought, but a robot can do only what it is programmed to do.
Second Person: Couldn't a chess-playing computer come up with a chess move that had never been played before and that would surprise even its programmer? That move would be as original for it as our thoughts are for us. Besides, isn't an original human thought just a surprising thought that is actually only the product of the person's genetic code plus his or her life's conditioning?
H.
First Person: No, an original thought is uncaused.
Second Person: If it's uncaused, then it is just random. Surely, good thinking isn't just random mumbling, is it? If you tell me it is uncaused physically but instead caused by our intent, I don't understand that.
I.
First Person: I wouldn't be so quick to write off intentions, but look, a thinking being must be able to handle itself in new situations. There are too many new situations for them to be explicitly handled in advance by the programmer; the task is just too large. So a robot couldn't ever really think.
Second Person: There are an endless number of new addition and multiplication problems, yet a small machine can handle them, can't it?
J.
First Person: Because of individual growth as well as the growth of our species itself, we're all the end products of our entire history, but human history cannot be written down and programmed into a computer. We all know a lot that we can't say, or can't write down in notation. We know it implicitly in our flesh and blood, so to speak. A computer knows only what can be written down.
Second Person: I disagree. OK, so you know something you can't write down. You know how to ride a bicycle but can't write down the details. It doesn't follow that the details can't be written down just because you can't write them down. Someone else can write them down and use them to permit a robot to ride a bicycle, too. Besides, why are you making such a big point about being made of flesh and blood? You can add, and you are made of hydrocarbon; a calculator can add, and it's made of copper and silicon. The stuff can't be that important.9
K.
First Person: We carbon-based beings really know what we are doing when we add; the calculator doesn't.
Second Person: I agree, but you're overestimating "stuff." If living organisms are composed of molecules that are intrinsically inanimate and unintelligent, why is it that living, conscious matter differs so radically from unconscious nonliving matter? They both are made of molecules. The answer must be the ways those molecules are put together, because the essence of life and thinking is not substance but instead complex patterns of information. Life and thinking are properties of the way stuff is organized. If you organized the stuff correctly, then life and thought might exist in most any substance, whether it be flesh or silicon chips.
L.
First Person: It isn't really a question of "stuff" or of programming. It is more a question of essence. We are essentially of a different nature. Thinking beings have souls, but robot computers do not.
Second Person: If a machine were built with sufficient ingenuity, couldn't God give it a soul?
M.
First Person: Yes, God could, but God wouldn't.
Second Person: How do you know what God would do?