In high school you were probably introduced to something misleading called the scientific method. According to this picture of science, science proceeds by asking a question, formulating a hypothesis, designing an experiment to test the hypothesis, and analyzing the results to reach a conclusion. The experiment should be repeatable and the hypothesis is only considered well supported if our experimentation yields plenty of data in support of it. When we find plenty of data supporting our hypotheses, the pattern of reasoning employed is basic induction by enumeration where we generalize or predict based on observed patterns.
While this model does describe a frequently employed method in science, it’s misleading to think of it as the scientific method. The disservice done to the actual practice of science by this bit of high school curriculum is really quite egregious. It’s as if you were shown how to play a C major scale on the piano and then told, “there you go, that’s how to make music. That’s the method.” In actual practice, scientists employ a variety of methods that involve a broad range of patterns of reasoning, both inductive and deductive. Testing hypotheses often involves things like hunting for clues, diagnosing the reasons for unexpected results, engineering new ways of detecting evidence, and a great many things beyond designing experiments and generalizing from their results. The support for a hypothesis is often a matter of inference to the best explanation rather than inductive generalization. Sometimes the best analysis of data seeks alternative explanations for data anomalies that do not fit with predictions rather than automatically counting such data as evidence against a hypothesis.
Investigating the messy, gritty details that drive actual scientific practice is where the real action in the philosophy of science is today. Explaining how science advances human understanding of the world often requires a close examination of what’s going on in actual scientific practice. It is not uncommon for philosophers of science to describe their work as something like the science of science. Methods are not to be prescribed up front by the philosophical lords of epistemology. Rather, in contemporary philosophy of science we look to science to see what methods actually work, and then try to better understand the significance of these.
Over the past few chapters we have covered a couple of classic skeptical problems. In the wake of Descartes and Hume you might worry that we can’t know much at all. Out of intellectual laziness, lots of people are willing to just let the matter rest there and think we can have only so many subjective opinions, even about scientific matters (witness, for instance, the response of many people to deniers of climate science). It’s hard, however, to take this uncritical skepticism seriously in the face of the truly impressive achievements of science over the past few centuries. Looking at these achievements, it seems we have pretty powerful evidence for our ability to figure things out and attain knowledge and understanding. So, the suggestion I want to make at the outset of this chapter is that the way to address the skeptical problems raised by Hume might be to examine more closely the methods by which we seem to attain knowledge and begin to sort out how they work in practice. In this chapter we will trace a few developments over the course of the 20th century with an eye to better understanding how the philosophy of science has developed into what it is today. We will start with Logical Positivism, a broad empiricist movement of the early 20th century.
Logical Positivism can be understood as Empiricism, heavily influenced by Hume, and supercharged with powerful new developments in symbolic logic. The system of logic that we now teach in college level symbolic logic courses (PHIL& 120 at BC) was developed just over a century ago in the work of Gottlob Frege, Bertrand Russell, and Alfred North Whitehead for the purpose of better understanding the foundations of mathematics. In Principia Mathematica, Russell and Whitehead made a strong case for analyzing all of mathematics in terms of logic (together with set theory). According to the argument of Principia Mathematica, mathematical truths are not truths justified independent of experience by the light of reason alone. Rather, they are derivable from logic and set theory alone. Merely logical truths are trivial in the sense that they tell us nothing about the nature of the world. Any sentence of the form ‘Either P or not P’, for instance, is a basic logical truth. But, like all merely logical truths, sentences having this form assert nothing about how the world is. Logic doesn’t constitute knowledge of the world; it is merely a tool for organizing knowledge and maintaining consistency.
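The emptiness of merely logical truths can be put in concrete terms with a short sketch (the code is my illustration, not anything from the Positivists): a tautology like ‘Either P or not P’ comes out true under every assignment of truth values, so it rules out nothing about how the world is.

```python
# A merely logical truth is true under every assignment of truth values,
# so it asserts nothing about how the world actually is.
# (Illustrative sketch; the code is mine, not the Positivists'.)

def is_tautology(formula):
    """Check whether a one-variable formula holds on every truth value of P."""
    return all(formula(p) for p in (True, False))

excluded_middle = lambda p: p or not p   # 'Either P or not P'
contingent = lambda p: p                 # plain 'P' -- true only in some worlds

print(is_tautology(excluded_middle))  # True: holds no matter what
print(is_tautology(contingent))       # False: its truth depends on the facts
```

A claim whose truth can be settled without ever looking at the world conveys no information about the world, which is just what the Positivists meant in calling logical truths trivial.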
Mathematics had long served as the rationalist’s paradigm case of knowledge justified through reason alone. So we can make a powerful case for Empiricism by showing that math is really just an extension of logic. It remains debatable whether Frege, Russell, and Whitehead succeeded in showing this, but their attempt, and especially the powerful new system of logic they developed in making this attempt, constituted a powerful blow against Rationalism and inspired a group of empirically minded philosophers and scientists in Vienna to employ the same logical tools in analyzing and clarifying philosophical issues in science. As we will see, their ambitions were even grander since they also argued that much of what was going on in philosophy at the time was literally meaningless.
We will consider three central projects taken on by the Positivists in developing their Empiricist view of scientific knowledge: solving the demarcation problem, the problem of distinguishing science from non-science; developing a view about what a scientific theory is; and giving an account of scientific explanation. The Positivists utilize the resources of symbolic logic in each of these projects.
The Demarcation Problem
Among the main tasks the Positivists set for themselves was that of distinguishing legitimate science from other rather suspect fields and methods of human inquiry. Specifically, they wanted to distinguish science from religion, metaphysics, and pseudo-science like astrology.
19th century German metaphysics involved attempts to reason about such obscure notions as “the absolute” or the nature of “the nothing.” Such metaphysics needed to be distinguished from genuine science. We had also seen appeal to obscure, empirically suspect entities and forces in Aristotelian science, such as the “vital force” invoked to explain life, or the “dormative virtue,” a mysterious power of substances like opium to cause sleep. Such mysterious forces needed to be eliminated from genuine scientific discourse.
While metaphysics and talk of obscure forces in science were to be distinguished from genuine science, the Positivists needed to preserve a role for unobservable theoretical entities like atoms and electrons. The rejection of metaphysics and obscure forces must not undermine the legitimate role for theoretical entities.
The Positivists employed Empiricism in their proposed solution to the demarcation problem. Empiricism, as we know, is just the view that our sense experience is the ultimate source of justification for all of our factual knowledge of the world. The Positivists extend Empiricism to cover not just the justification of knowledge, but the meaningfulness of language as well. That is, they take the source of all meaning to ultimately be our sense experience. Only meaningful statements can be true or false. So, only statements whose meaning can ultimately be given in observational terms can be true or false. Theoretical terms like “atom” refer to things we can’t directly observe. But talk about such theoretical entities could be made empirically respectable by means of observational tests for when theoretical terms are being appropriately applied. Electrical charge, for instance, is not itself observable. But we can define theoretical terms in terms of observational tests for determining whether the term applies. So we might say that a thing is in a state of electrical charge if it registers voltage when electrodes are attached and hooked up to a voltage meter. Similarly, though you don’t directly observe the state of charge of a battery, you can easily carry out a test in observational terms by putting the battery in a flashlight and seeing if it lights up.
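The idea of defining a theoretical term by an observational test can be put in miniature. The sketch below is a hypothetical illustration of my own (the names and the energy figure are invented, not from the text): the theoretical predicate “is charged” is defined entirely by the outcome of an observable test.

```python
# Miniature operational definition: the theoretical term "charged" is given
# meaning entirely by an observational test. All names and numbers here
# are hypothetical illustrations, not from the text.

def flashlight_lights_up(battery):
    """Stand-in for the observable outcome of putting the battery in a flashlight."""
    return battery.get("stored_energy_joules", 0) > 0

def is_charged(battery):
    """Theoretical predicate, defined by the observational test above."""
    return flashlight_lights_up(battery)

fresh = {"stored_energy_joules": 5000}
dead = {"stored_energy_joules": 0}

print(is_charged(fresh))  # True
print(is_charged(dead))   # False
```

The point of the sketch is that `is_charged` has no content over and above the test: the theoretical vocabulary is cashed out entirely in observational terms.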
This doctrine about meaning was called the Verificationist Theory of Meaning (VTM). The Verificationist Theory of Meaning has it that a sentence counts as meaningful only if we can specify the observable conditions under which it would count as true or false. This view can then be used to distinguish empirically respectable language from nonsense. Legitimate scientific discourse must count as meaningful on the Verificationist Theory of Meaning. So we have a view on which science is distinguished as meaningful while pseudo-science, religion, poetry etc. are, strictly speaking, meaningless. Likewise, most of philosophy turns out to be meaningless as well. Not only will obscure 19th century German metaphysics turn out to be meaningless, but talk of free will, immaterial substances, and all of ethics will likewise turn out to be meaningless. The only legitimate role left for philosophers, according to the Logical Positivists, will be the logical analysis of scientific discourse. Being meaningless, religion, pseudo-science, most of philosophy, literature etc. are neither true nor false. While these things cannot be true or false, according to the Positivists’ criteria for meaningfulness, they may provide helpful expressions of human emotions, attitudes towards life, etc. That is, poetry, literature, religion, and most philosophy will be merely so much comforting or disturbing babble, mere coos, squeals, or screams.
Significant progress is made by paying close attention to the meaningfulness of scientific discourse. But the Verificationist Theory of Meaning eventually falls apart for a number of reasons, including that it turns out not to be meaningful according to its own criteria. Amusingly, we can’t provide an empirical test of truth or falsity for the claim that a claim is meaningful only if we can provide an empirical test for its truth or falsity. That is, according to the Verificationist Theory of Meaning, the term “meaning” turns out to be meaningless. Logical Positivism remained a powerful influence in philosophy through much of the 20th century and it did serve to weed out some pretty incomprehensible metaphysics. But I can now happily report that other important areas of philosophy, notably ethics and metaphysics, have recovered from the Positivists’ assault on philosophy from within.
Understanding the Logical Positivist view of theories requires that we say a few things about formal languages. The symbolic logic developed in Russell and Whitehead’s Principia Mathematica is a formal language. Computer languages are also formal languages. A formal language is a precisely specified artificial language. A formal language is specified by doing three things:
identify the language’s vocabulary.
identify what counts as a well formed expression of that language.
give axioms or rules of inference that allow you to transform certain kinds of well formed expressions into other kinds of well formed expressions.
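These three components can be illustrated with a toy formal language. The following is a hypothetical miniature of my own devising (two sentence letters and one connective), not the Positivists’ own formalism: it specifies a vocabulary, a test for well-formedness, and a single rule of inference (modus ponens).

```python
# A toy formal language, specified by the three components above.
# (Hypothetical miniature for illustration, not the Positivists' formalism.)

LETTERS = {"P", "Q"}     # 1. vocabulary: sentence letters...
CONNECTIVES = {"->"}     #    ...and one connective

def well_formed(expr):
    """2. Well-formed expressions: a letter, or 'A -> B' where A and B are letters."""
    parts = expr.split()
    if len(parts) == 1:
        return parts[0] in LETTERS
    return (len(parts) == 3 and parts[1] == "->"
            and parts[0] in LETTERS and parts[2] in LETTERS)

def derive(premises):
    """3. Rule of inference (modus ponens): from 'A' and 'A -> B', derive 'B'."""
    derived = set(premises)
    for premise in premises:
        parts = premise.split()
        if len(parts) == 3 and parts[1] == "->" and parts[0] in premises:
            derived.add(parts[2])
    return derived

print(well_formed("P -> Q"))    # True: a well formed expression
print(well_formed("-> P"))      # False: mere strings of vocabulary don't count
print(derive({"P", "P -> Q"}))  # the rule lets us derive 'Q' as well
```

Nothing in the language says what “P” or “Q” mean; the formal system only fixes which strings are legitimate and which transformations are permitted, which is exactly the sense in which it is a formal language.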
Scientific theories are formal languages according to the Positivists. We can understand what this means by considering the component parts of a scientific theory and how these map onto the elements of formal languages just given. A theory consists of the formal language of first order predicate logic with quantifiers (the logic developed first by Frege and then in greater detail by Russell and Whitehead) supplemented with observational vocabulary, correspondence rules that define theoretical terms in terms of observational vocabulary, and statements of laws like Galileo’s laws of motion, Newton’s law of universal gravitation, etc. All of the non-logical vocabulary of a scientific theory is definable in observational terms. Well formed expressions in scientific discourse will be only those expressible in terms of formal logic plus the vocabulary of science. The rules of inference in scientific discourse consist only of the rules of inference of logic and math plus scientific laws.
The Logical Positivists’ view of what a theory is has since been deemed overly formalized. There are numerous legitimate theories in science that can’t be rendered in a formal system. Consider theories in anthropology or geology, for instance. Nevertheless, the idea of a theory as a formal system is a powerful one and it remains the gold standard in many sciences. Linguistics has “gone computational” in recent years, for instance. The most ambitious scientific undertaking in all of human history, the science of climate change, also aims to render theory and explanation in formal systems through massive and intricately detailed computer models of climate change. In fact, roughly speaking, we can consider a theory formalizable when it can be comprehensively modeled on a computer. Computer programs are paradigm examples of formal systems.
A further, more general lesson we might take from the Positivists’ view of theories addresses a very commonplace misunderstanding of what a theory is. People commonly think of theories as claims that lie on a scale of certainty, being somewhat more certain than guesses or hypotheses but rather less certain than established matters of fact. This is really a terrible misunderstanding of what a theory is. It is commonly invoked in fallacious attempts to discredit science, as when people dismiss evolution or climate change science as “just a theory.” Such comments reveal a basic misunderstanding of what a theory is. For something to count as a theory has nothing to do with our level of certainty in its truth. Many scientific theories are among the best established scientific knowledge we have. A few years ago, for instance, some scientists claimed to have observed a particle in a particle accelerator travelling faster than the speed of light. It made the news and caused a bit of excitement. But those in the know, those who understand Einstein’s special relativity and the full weight of the evidence in support of it, patiently waited for the inevitable revelation that some clocks had been mis-calibrated. Einstein’s special relativity is right and we know this with about as much certainty as we can know anything. In the other direction, there are lots of genuine theories that we know full well to be false. Aristotle’s physics would be one example. Having very much or very little confidence in something has nothing to do with whether it is properly called a theory.
So if it’s not about our degree of confidence, what does make something a theory? What makes something a theory is that it provides a general framework for explaining things. The Positivists didn’t discover this, but their idea of a theory as a formal system illustrates the idea nicely. Theories generally consist of a number of logically interconnected principles that can be jointly employed to explain and predict a range of observable phenomena. Bear this in mind as we consider the Positivists’ view of scientific explanation.
According to the Deductive Nomological model of explanation developed by the Logical Positivist, Carl Hempel, a scientific explanation has the form of a deductively valid argument. The difference between an argument and an explanation is just their respective purposes. Formally, arguments and explanations look alike. But the purpose of an explanation is to shed light on something we accept as true, while the purpose of an argument is to give us a reason for thinking something is true. Given this difference in purpose, we call the claim that occupies the place of the conclusion the explanandum (it’s the fact to be explained), and the claims that occupy the place of the premises the explanans (these are the claims that, taken together, provide the explanation). In a scientific explanation, the explanans will consist of laws and factual claims. The factual claims in conjunction with the laws will deductively entail the explanandum.
For example, consider this explanation for why a rock falls to the earth:
1. F = GM₁M₂/r², Newton’s law of universal gravitation, which tells us that massive bodies experience a force of mutual attraction that is proportional to the product of their masses and inversely proportional to the square of the distance between them.
2. F=MA. This is the force law, which tells us that force equals mass times acceleration.
3. The rock has mass of 1 Kg.
4. The earth has a mass of 5.97219 × 10²⁴ kilograms.
5. The rock was released within the gravitational field of the earth.
6. No forces prevented the rock from falling to the earth.
7. The rock fell to the earth.
Recall that deductive logic is part of every theory, every explanatory framework. The first two claims in this explanation are statements of law from Newtonian physics. The remaining four are statements of fact. Taken together, these six claims deductively entail the explanandum, that the rock fell to the earth. This should illustrate how theories function as explanatory frameworks.
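We can even check the arithmetic of the example: plugging the factual premises into the two laws yields the familiar acceleration of falling bodies. Note that the gravitational constant and the earth’s radius below are standard values I am supplying for the sketch; the explanation above gives only the masses.

```python
# Checking the example numerically: the factual premises plus the two laws
# entail that the rock accelerates toward the earth. G and the earth's
# radius are standard values supplied here; the text gives only the masses.

G = 6.674e-11           # gravitational constant in N·m²/kg² (assumed)
m_rock = 1.0            # kg, premise 3
m_earth = 5.97219e24    # kg, premise 4
r = 6.371e6             # m, earth's mean radius (assuming release near the surface)

F = G * m_rock * m_earth / r**2   # law 1: F = GM₁M₂/r²
a = F / m_rock                    # law 2: F = MA, so A = F/M

print(f"Force on the rock: {F:.2f} N")   # roughly 9.8 N
print(f"Acceleration: {a:.2f} m/s^2")    # roughly 9.8 m/s², so the rock falls
```

The familiar 9.8 m/s² drops out of the laws and the particular facts together, which is just what the Deductive Nomological model says an explanation should do.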
One very useful thing Hempel’s account of explanation does is alert us to the argument-like structure of developed explanations. The basic idea here is that a complete explanation should include all of the facts involved in making the fact to be explained true. These will include both particular facts relevant to the specific fact we want explained and general principles (scientific laws in the case of scientific explanations) that belong to a broader framework for explanation. A fully developed explanation reveals a logical relationship between the fact we want to explain, other relevant facts and connecting principles like laws of nature.
Hempel’s account of explanation faced a number of problems that have helped to refine our understanding of scientific explanation. We won’t address them here except to mention one because it’s amusing. Consider this explanation:
Men who take birth control pills do not get pregnant.
Bruce is a man and he takes birth control pills.
Bruce is not pregnant.
This seems to meet all of the Positivists’ criteria for being an explanation. But aside from being silly, it’s not a very good explanation for why Bruce is not pregnant. Problem cases like this suggest that purely formal accounts of explanation like Hempel’s will fall short in sorting out which facts are relevant in an explanation.
There is also a more general lesson I’d like you to take from the Positivists’ account of explanation. For your entire career as a student you’ve been asked to explain things, but odds are nobody has ever really explained what it means to explain something. Personally, I don’t think I had ever given a thought to what an explanation was until I encountered the Deductive Nomological account in a philosophy of science class. But now you’ve been introduced to a model of explanation. You may not find it fully applicable to every academic situation you encounter. But if you try to make use of it by thinking of explanations as having a developed, argument-like structure, you might find your grades in many of your classes improving significantly.
We mentioned at the outset that Logical Positivism was very much influenced by Hume’s Empiricism. You will recall that Hume argued for some surprising skeptical results. The Logical Positivists adopted one of two strategies for dealing with this. On some issues it was argued that Hume’s skeptical conclusions were acceptable, while on others Hume’s skepticism was regarded as a problem yet to be solved. As an example of the first strategy, Bertrand Russell, though not a Logical Positivist himself, wrote an influential paper in which he argued that science can proceed as usual without any reference to the notion of causation. Skepticism about necessary causal connections was deemed not to be problematic. Skepticism about induction was more difficult to accept. So the early 20th century saw a variety of sometimes colorful but generally unsuccessful attempts to resolve the problem of induction. And this brings us to Karl Popper.