
5.2: Cogency and Strong Arguments


    Strength and Weakness

Inductive arguments are said to be either strong or weak. There’s no absolute cut-off between strength and weakness, but some arguments will be very strong and others very weak, so the distinction is still useful even if it is not precise. A strong argument is one where, if the premises were true, the conclusion would be very likely to be true. A weak argument is one where the conclusion does not follow from the premises (i.e., even if the premises were true, there would still be a good chance that the conclusion is false).

    Most arguments in courts of law attempt to be strong arguments; they are generally not attempts at valid arguments.

    So, the following example is a strong argument.

John was found with a gun in his hand, running from the apartment where Tom's body was found. Three witnesses heard a gunshot right before they saw John run out. The gun in John's possession matched the ballistics on the bullet pulled from Tom's head. John had written a series of threatening letters to Tom. In prison, John confessed to his cellmate that he had killed Tom. Therefore, John is the murderer of Tom.

    Given that all the premises were true, it would be very likely that the conclusion would be true.

    Importantly, strength has nothing to do with the actual truth of the premises!

    This is something people frequently forget, so it’s worth repeating: A STRONG ARGUMENT NEEDN’T HAVE ANY TRUE PREMISES! ALL THE PREMISES OF A STRONG ARGUMENT CAN BE FALSE!

    The argument is strong because: if the premises WERE true, the conclusion would be likely to be true.

    So the following arguments are strong:

98% of Dominicans have superpowers. Lucy is Dominican. I saw Lucy leap from the top of a tall building last week and walk away unscathed. Therefore, Lucy has superpowers.

    People from the lost continent of Atlantis have been manipulating the world’s governments for years by placing Atlantean wizards in positions of power. Whenever possible, they place an Atlantean wizard in the executive position of the most powerful government on earth. They did this in the Roman empire, the Mongol empire, and the British empire. Currently, the United States is the most powerful country on earth. Barack Obama was born in Hawai’i, where about 45% of the people are actually Atlanteans. While he was a Senator from Illinois, he received over 10 billion dollars in funds from a mysterious holding company called “Atlantis Incorporated.” Several journalists claim that they have seen Barack Obama perform feats of magic. For example, Shep Smith of Fox News said he saw Barack Obama walk on water. Barack Obama is clearly an Atlantean wizard.

    Two leading researchers in genetics have found that, in every sample of DNA they looked at, there were traces of kryptonite. They examined 1600 samples, from 1600 separate individuals, including an equal distribution from all continents. The results were then replicated in another, larger study of 2700 samples, also taken from all continents. We conclude, then, that normal DNA contains kryptonite.

    Cogency: If an argument is strong and all its premises are true, the argument is said to be cogent.

    The following arguments are weak. The premises provide little, if any, evidence for the conclusions:

    I saw your boyfriend last night and he was talking to another girl. So he’s cheating on you.

Senator Bonham served eight years in the military, whereas his opponent, Mr. Malham, never served one day of military service. So you should vote for Senator Bonham.

More people buy Juff™ brand peanut butter than any other brand, so you should buy Juff™!

    It’s notable, again, that the truth of the premises is irrelevant. A weak argument can have true premises and a true conclusion. What makes it weak is that the premises do not provide good reason to believe the conclusion.

Induction

All of the argument forms we have looked at so far have been deductively valid. That meant, we said, that the conclusion follows necessarily if the premises are true. But to what extent can we ever be sure of the truth of those premises? Inductive argumentation is a less certain, more realistic, more familiar way of reasoning that we all do, all the time. Inductive argumentation recognizes, for instance, that a premise like “All horses have four legs” comes from our previous experience of horses. If one day we were to encounter a three-legged horse, deductive logic would tell us that “All horses have four legs” is false, at which point the premise becomes rather useless for a deducer. In fact, once that premise is false, then even if we know there are many, many four-legged horses in the world, and even when we go to the track and see hordes of four-legged horses, all deductive logic really entitles us to be certain of is that “There is at least one four-legged horse.”

    Inductive logic allows for the more realistic premise, “The vast majority of horses have four legs”. And inductive logic can use this premise to infer other useful information, like “If I’m going to get Chestnut booties for Christmas, I should probably get four of them.” The trick is to recognize a certain amount of uncertainty in the truth of the conclusion, something for which deductive logic does not allow. In real life, however, inductive logic is used much more frequently and (hopefully) with some success. Let’s take a look at some of the uses of inductive reasoning.

    Predicting the Future

We constantly use inductive reasoning to predict the future. We do this by compiling evidence based on past observations, and by assuming that the future will resemble the past. For instance, I make the observation that every time in the past that I have gone to sleep at night, I have woken up in the morning. There is actually no certainty that this will happen, but I make the inference because that is what has happened every time before. In fact, it is not the case that “All people who go to sleep at night wake up in the morning”. But I’m not going to lose any sleep over that. And we do the same thing when our experience has been less consistent. For instance, I might make the assumption that, if there’s someone at the door, the dog will bark. But it’s not outside the realm of possibility that the dog is asleep, has gone out for a walk, or has been persuaded not to bark by a clever intruder with sedative-laced bacon. I make the assumption that if there’s someone at the door, the dog will bark, because that is what usually happens.

    Explaining Common Occurrences

    We also use inductive reasoning to explain things that commonly happen. For instance, if I’m about to start an exam and notice that Bill is not here, I might explain this to myself with the reason that Bill is stuck in traffic. I might base this on the reasoning that being stuck in traffic is a common excuse for being late, or because I know that Bill never accounts for traffic when he’s estimating how long it will take him to get somewhere. Again, that Bill is actually stuck in traffic is not certain, but I have some good reasons to think it’s probable. We use this kind of reasoning to explain past events as well. For instance, if I read somewhere that 1986 was a particularly good year for tomatoes, I assume that 1986 also had some ideal combination of rainfall, sun, and consistently warm temperatures. Although it’s possible that a scientific madman circled the globe planting tomatoes wherever he could in 1986, inductive reasoning would tell me that the former, environmental explanation is more likely. (But I could be wrong.)

    Generalizing

    Often we would like to make general claims, but in fact it would be very difficult to prove any general claim with any certainty. The only way to do so would be to observe every single case of something about which we wanted to make an observation. This would be, in fact, the only way to prove such assertions as, “All swans are white”. Without being able to observe every single swan in the universe, I can never make that claim with certainty. Inductive logic, on the other hand, allows us to make the claim, with a certain amount of modesty.

    Inductive Generalization

    Inductive generalization allows us to make general claims, despite being unable to actually observe every single member of a class in order to make a certainly true general statement. We see this in scientific studies, population surveys, and in our own everyday reasoning. Take for example a drug study. Some doctor or other wants to know how many people will go blind if they take a certain amount of some drug for so many years. If they determine that 5% of people in the study go blind, they then assume that 5% of all people who take the drug for that many years will go blind. Likewise, if I survey a random group of people and ask them what their favourite colour is, and 75% of them say “purple”, then I assume that purple is the favourite colour of 75% of people. But we have to be careful when we make an inductive generalization. When you tell me that 75% of people really like purple, I’m going to want to know whether you took that survey outside a Justin Bieber concert.

    Let’s take an example. Let’s say I asked a class of 400 students whether or not they think logic is a valuable course, and 90% of them said yes. I can make an inductive argument like this:

    (P1) 90% of 400 students believe that logic is a valuable course.

    (C) Therefore 90% of students believe that logic is a valuable course.

    There are certain things I need to take into account in judging the quality of this argument. For instance, did I ask this in a logic course? Did the respondents have to raise their hands so that the professor could see them, or was the survey taken anonymously? Are there enough students in the course to justify using them as a representative group for students in general?

If I did, in fact, make a class of 400 logic students raise their hands in response to the question of whether logic is a valuable course, then we can identify a couple of problems with this argument. The first is bias. We can assume that anyone enrolled in a logic course is more likely to see it as valuable than any random student. I have therefore skewed the argument in favour of logic courses. I can also question whether the students were answering the question honestly. Perhaps if they are trying to save the professor’s feelings, they are more likely to raise their hands and assure her that the logic course is a valuable one.

Now let’s say I’ve avoided those problems. I have ensured that the 400 students I have asked are randomly selected, say, by soliciting email responses from randomly selected students from the university’s entire student population. Then the argument looks stronger.

    Another problem we might have with the argument is whether I have asked enough students so that the whole population is well-represented. If the student body as a whole consists of 400 students, my argument is very strong. If the student body numbers in the tens of thousands, I might want to ask a few more before assuming that the opinions of a few mirror those of the many. This would be a problem with my sample size.
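Whether I have asked “enough” students comes down to how much survey results can vary by chance. Here is a minimal simulation sketch in Python (the 90% figure comes from the example above; the population size and everything else are made-up assumptions for illustration): it draws repeated random samples of different sizes from a hypothetical student body and shows how the estimates settle down as the sample grows.

```python
import random

random.seed(1)

TRUE_PROPORTION = 0.90   # assumed for illustration: 90% of all students value logic
POPULATION_SIZE = 20000  # hypothetical size of the whole student body

# Build a hypothetical population: 1 = "logic is valuable", 0 = "it is not"
population = [1] * int(TRUE_PROPORTION * POPULATION_SIZE)
population += [0] * (POPULATION_SIZE - len(population))

def survey(n):
    """Ask n randomly chosen students and return the proportion who said yes."""
    sample = random.sample(population, n)
    return sum(sample) / n

# Small samples bounce around a lot; larger samples settle near the true 90%.
for n in (10, 100, 400, 5000):
    estimates = [round(survey(n), 2) for _ in range(5)]
    print(n, estimates)
```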

    Let’s take another example. Now I’m going to run a scientific study, in which I will pay someone $50 to take a drug with unknown effects and see if it makes them blind. In order to control for other variables, I open the study only to white males between the ages of 18 and 25.

    A bad inductive argument would say:

    (P1) 40% of 1000 people who took the drug went blind.

    (C) Therefore 40% of people who take the drug will go blind.

    A better inductive argument would make a more modest claim:

    (P1) 40% of the 1000 people who took the drug went blind.

    (C) Therefore 40% of white males between the ages of 18 and 25 who take the drug will go blind.

The point behind this example is to show how inductive reasoning imposes an important limitation on the possible conclusions a study or a survey can make. In order to make good generalizations, we need to ensure that our sample is representative, unbiased, and sufficiently large.

    Statistical Syllogism

Where an inductive generalization takes a statistic drawn from a sample and applies it to a more general group, we can also use statistics to go from the general to the particular. For instance, if I know that most computer science majors are male, and that some random individual with the androgynous name “Cameron” is a computer science major, then we can be reasonably certain that Cameron is male. We tend to represent the uncertainty by qualifying the conclusion with the word “probably”. If, on the other hand, we wanted to say that something is unlikely, such as that Cameron is female, we could use “probably not”. It is also possible to temper our conclusion with other similar qualifying words.

    Let’s take an example.

    (P1) Of the 133 people found guilty of homicide last year in Canada, 79% were jailed.

    (P2) Socrates was found guilty of homicide last year in Canada.

    (C) Therefore, Socrates was probably jailed.

In this case we can be reasonably sure that Socrates is currently rotting in prison. The certainty of our conclusion, though, depends on the statistics we’re dealing with; some cases are much more certain than others.

    (P1) In the last election, 50% of voting Americans voted for Obama, while 48% voted for Romney.

    (P2) Jim is a voting American.

    (C) Therefore, Jim probably voted for Obama.

    Clearly, this argument is not as strong as the first. It is only slightly more likely than not that Jim voted for Obama. In this case we might want to revise our conclusion to say:

    (C) Therefore, it is slightly more likely than not that Jim voted for Obama.
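The arithmetic behind “slightly more likely than not” is just the small gap between the two percentages in the premise. A quick sketch (Python, using only the figures given above):

```python
# Percentages from the premise above
obama, romney = 0.50, 0.48

print(obama - romney)            # 0.02 -- only a two-point edge
print(obama / (obama + romney))  # ~0.51 -- Jim's chance of being an Obama voter,
                                 # considering only the two major candidates
```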

    In other cases, the likelihood that something is or is not the case approaches certainty. For example:

    (P1) There is a 0.00000059% chance you will die on any single flight, assuming you use one of the most poorly rated airlines.

    (P2) I’m flying to Paris next week.

(C) Therefore, the odds against my dying on my flight are better than a million to one.
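The conclusion comes from converting the percentage in (P1) into odds. A quick check of the arithmetic (Python):

```python
risk_percent = 0.00000059              # the figure given in (P1), as a percentage
risk_probability = risk_percent / 100  # = 5.9e-9 as a probability

odds_against = 1 / risk_probability    # roughly 170 million to one
print(f"about 1 in {odds_against:,.0f} flights")
```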

    Note that in all of these examples, nothing is ever stated with absolute certainty. It is possible to improve the chances that our conclusions will be accurate by being more specific, or finding out more information. We would know more about Jim’s voting strategy, for instance, if we knew where he lived, his previous voting habits, or if we simply asked him for whom he voted (in which case, we might also want to know how often Jim lies).

    Induction by Shared Properties

    Induction by shared properties involves noting the similarity between two things with respect to their properties, and inferring from this that they may share other properties.

    A familiar example of this is how a company might recommend products to you based on other customers’ purchases. Amazon.com tells me, for instance, that customers who bought the complete Sex and the City DVD series also bought Lipstick Jungle and Twilight.

    Assuming that people buy things because they like them, we can rephrase this as:

    (P1) There are a large number of people who, if they like Sex and the City and Twilight, will also like Lipstick Jungle.

    I could also make the following observation:

    (P2) I like Sex and the City and Twilight.

And then infer from these two premises that:

    (C) I would also like Lipstick Jungle.

    And I did. In general, induction by shared properties assumes that if something has properties w, x, y, and z, and if something else has properties w, x, and y, then it’s reasonable to assume that that something else also has property z. Note that in the above example all of the properties were actually preferences with regard to entertainment. The kinds of properties involved in the comparison can and will make an argument better or worse. Let’s consider a worse induction.

    (P1) Lisa is tall, has blonde hair, has blue eyes, and rocks out to Nirvana on weekends.

    (P2) Gina is tall, has blonde hair, and has blue eyes.

    (C) Therefore Gina probably rocks out to Nirvana on weekends.

In this case the properties don’t seem to be related in the same way as in the first example. While the first three are physical characteristics, the last property instead indicates to us that Lisa is stuck in a ’90s grunge phase. Gina, though she shares several properties with Lisa, might not share the same undying love for Kurt Cobain. Let’s try a stronger argument.

    (P1) Bob and Dick both wear plaid shirts all the time, wear large plastic-rimmed glasses, and listen to bands you’ve never heard of.

    (P2) Bob drinks PBR.

    (C) Dick probably also drinks PBR.

    Here we can identify the qualities that Bob and Dick have in common as symptoms of hipsterism. The fact that Bob drinks PBR is another symptom of this affectation. Given that Dick is exhibiting most of the same symptoms, the idea that Dick would also drink PBR is a reasonable assumption to make.
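The overlap-counting part of this pattern can be sketched in a few lines of Python (the traits come from the examples above; the numeric score is just an illustrative stand-in for an informal judgment of strength):

```python
# Traits from the example above
bob  = {"plaid shirts", "plastic-rimmed glasses", "obscure bands", "drinks PBR"}
dick = {"plaid shirts", "plastic-rimmed glasses", "obscure bands"}

def shared_fraction(observed, reference, target):
    """What fraction of the reference's other traits does the observed person share?"""
    others = reference - {target}
    return len(observed & others) / len(others)

# Dick shares all three of Bob's other traits, so inferring "drinks PBR" looks strong.
print(shared_fraction(dick, bob, "drinks PBR"))  # 1.0

# Counting overlap alone cannot capture relevance, though: Gina matches Lisa on
# three physical traits, but those traits say little about her taste in music.
```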

    Practical Uses

    A procedure very much like Induction by Shared Properties is performed by nurses and doctors when they diagnose a patient’s condition. Their thinking goes like this:

    (P1) Patients who have elephantitus display an increased heart rate, elevated blood pressure, a rash on their skin, and a strong desire to visit the elephant pen at the zoo.

    (P2) The patient here in front of me has an increased heart rate, elevated blood pressure, and a strong desire to visit the elephant pen at the zoo.

    (C) It is probable, therefore, that the patient here in front of me has elephantitus.

The more that a patient’s symptoms match the ‘textbook definition’ of a given disease, the more likely it is that the patient has that disease. Caregivers then treat the patient for the disease that they think the patient probably has. If the disease doesn’t respond to the treatment, or the patient starts to present different symptoms, then they consider other conditions with similar symptoms that the patient is likely to have.

    Induction by Shared Relations

Induction by shared relations is much like induction by shared properties, except that what is shared are not properties but relations. A simple example is the causal relation, from which we might make an inductive argument like this:

    (P1) Percocet, Oxycontin and Morphine reduce pain, cause drowsiness, and may be habit forming.

    (P2) Heroin also reduces pain and causes drowsiness.

    (C) Heroin is probably also habit forming.

    In this case the effects of reducing pain, drowsiness, and addiction are all assumed to be caused by the drugs listed. We can use an induction by shared relation to make the probable conclusion that if heroin, like the other drugs, reduces pain and causes drowsiness, it is probably also habit forming.

Another interesting example is the relations we have with other people. For instance, Facebook knows everything about you, but let’s focus on the “friends with” relation. Facebook compares who your friends are with the friends of your friends in order to determine who else you might actually know. The induction goes a little like this:

    (P1) Donna is friends with Brandon, Kelly, Steve, and Brenda.

    (P2) David is friends with Brandon, Kelly, and Steve.

    (C) David probably also knows Brenda.

    We could strengthen that argument if we knew that Brandon, Kelly, Steve, and Brenda were all friends with each other as well. We could also make an alternate conclusion based on the same argument above:

    (C) David probably also knows Donna.

    They do, after all, know at least three of the same people. They’ve probably run into each other at some point.
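A toy version of that inference, using only the friend lists from the example (Python):

```python
# Friend lists from the example above
donna = {"Brandon", "Kelly", "Steve", "Brenda"}
david = {"Brandon", "Kelly", "Steve"}

mutual = donna & david       # {'Brandon', 'Kelly', 'Steve'}
suggestions = donna - david  # {'Brenda'} -- the person David may also know

# The more mutual friends two people share, the stronger the inference that
# they know each other (and know each other's other friends).
print(len(mutual), suggestions)
```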


This page titled 5.2: Cogency and Strong Arguments is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Noah Levin (NGE Far Press).