
15.2.7: Confirming by Testing

    To prove your hypothesis about tuna scientifically, you would need to run some tests. One test would be to eat the tuna again and see whether it causes the symptoms again. That sort of test might be dangerous to your health. Here is a better test: acquire a sample of the tuna and examine it under a microscope for bacteria known to cause the symptoms you had.

    Suppose you do not have access to the tuna. What can you do? You might ask other people who ate the tuna: "Did you get sick, too?" "Yes" answers would make the correlation more significant. Suppose, however, you do not know anybody to ask. Then what? The difficulty now is that even if you did eat tuna before you got your symptoms, the tuna may not have been the only relevant difference. You probably also ate something else, such as french fries with catsup. Could this have been the problem instead? You would be jumping to conclusions to blame the tuna merely on the basis of the tuna eating being followed by the symptoms; that sort of jump commits the post hoc fallacy. At this point you simply do not have enough evidence to determine the cause of your illness.
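
    As a rough illustration (using invented survey numbers, not data from the text), here is a short Python sketch of why more "yes" answers strengthen the correlation: compare the illness rate among people who ate the tuna with the rate among people who did not. A lopsided gap makes the correlation harder to dismiss as coincidence, yet it still falls short of proving causation.

        # Hypothetical survey counts -- purely illustrative, not real data.
        ate_tuna = {"sick": 7, "well": 3}   # answers from people who ate the tuna
        no_tuna  = {"sick": 1, "well": 9}   # answers from people who did not

        rate_tuna = ate_tuna["sick"] / (ate_tuna["sick"] + ate_tuna["well"])
        rate_none = no_tuna["sick"] / (no_tuna["sick"] + no_tuna["well"])

        print(f"illness rate among tuna eaters: {rate_tuna:.0%}")   # 70%
        print(f"illness rate among the others:  {rate_none:.0%}")   # 10%
        # The wider the gap (and the larger the sample), the stronger the evidence
        # of correlation -- but the french fries could still be the real culprit.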

    Let's reexamine this search for the cause, but at a more general level, one that will provide an overview of how science works in general. When scientists think about the world in order to understand some phenomenon, they try to discover some pattern or some causal mechanism that might be behind it. They try out ideas the way the rest of us try on clothes in a department store. They don't adopt the first idea they have, but instead are willing to try a variety of ideas and to compare them.

    Suppose you, a scientist, have uncovered what appears to be a suspicious, unexplained correlation between two familiar phenomena, such as vomiting and tuna eating. Given this observed correlation, how do you go about explaining it? You have to think of all the reasonable explanations consistent with the evidence and then rule out as many as you can until only the truth remains. One way an explanation is ruled out is when you collect reliable data inconsistent with it. Another way is when you notice that the explanation is inconsistent with accepted scientific laws. If you are unable to refute the serious alternative explanations, you will be unable to find the truth; knowledge of the true cause will elude you. This entire cumbersome process of searching out explanations and trying to refute them is called the scientific method of justifying a claim. There is no easier way to get to the truth. People have tried to take shortcuts by gazing into crystal balls, taking drugs, or contemplating how the world ought to be, but those methods have turned out to be unreliable.
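
    The elimination step can be pictured mechanically. The following sketch (with hypothetical candidate explanations and evidence, chosen only for illustration) keeps just the explanations that are consistent with every piece of reliable data collected so far:

        # Candidate explanations for the illness -- placeholders for illustration.
        candidates = ["bacteria in the tuna", "the french fries", "a stomach virus"]

        # Each piece of reliable evidence rules out (refutes) certain candidates.
        evidence = [
            {"note": "a friend ate the same fries and stayed well",
             "refutes": {"the french fries"}},
            {"note": "no fever, and the symptoms ended within a day",
             "refutes": {"a stomach virus"}},
        ]

        # Keep only the explanations that no piece of evidence refutes.
        survivors = [h for h in candidates
                     if all(h not in e["refutes"] for e in evidence)]
        print(survivors)   # ['bacteria in the tuna'] -- the one not yet ruled out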

    Observation is passive; experimentation is active. Experimentation is a poke at nature. It is an active attempt to create the data needed to rule out a hypothesis. Unfortunately, scientists often cannot test the objects they are most interested in. For example, experimenters interested in whether some potential drug might be harmful to humans would like to test humans but must settle for animals. Scientists get into serious disputes with each other about whether the results of testing on rats, rabbits, and dogs carry over to humans. This dispute is really a dispute about analogy: is the animal's reaction analogous to the human's reaction?

    Scientists often collect data from a population in order to produce a general claim about that population. The goal is to get a representative sample, and this goal is more likely to be achieved if the sample size is large, random, diverse, and stratified. Nevertheless, nothing you do with your sampling procedure will guarantee that your sample will be representative. If you are interested in making some claim about the nature of polar bears, even capturing every living polar bear and sampling it will not guarantee that you know the characteristics of polar bears that roamed the Earth 2,000 years ago. Relying on background knowledge about the population's lack of diversity can reduce the sample size needed for the generalization, and it can reduce the need for a random sampling procedure. If you have well-established background knowledge that electrons are all alike, you can run your experiment with any old electron; don't bother getting Egyptian electrons as well as Japanese electrons.
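
    To make the sampling ideas concrete, here is a small sketch of stratified sampling; the population and its subgroups are invented for illustration. Drawing from each known subgroup in proportion to its size keeps the sample's makeup close to the population's, something simple random sampling achieves only on average:

        import random

        # Invented population with two known subgroups ("strata").
        population = ["wild bear"] * 800 + ["captive bear"] * 200
        random.seed(0)   # make the example repeatable

        def stratified_sample(pop, labels, n):
            """Draw n items, taking each stratum in proportion to its share of pop."""
            sample = []
            for label in labels:
                stratum = [x for x in pop if x == label]
                share = round(n * len(stratum) / len(pop))
                sample += random.sample(stratum, share)
            return sample

        s = stratified_sample(population, ["wild bear", "captive bear"], n=50)
        print(s.count("wild bear"), s.count("captive bear"))   # 40 10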


    This page titled 15.2.7: Confirming by Testing is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Bradley H. Dowden.
