12.5: Functions of Language and Precision in Speech
Good Linguistic Habits17
Sometimes you may find that things are more complex or more elaborate than they appear to be at first, and it is often the job of reason to uncover layers of complexity behind appearances. Still, if you have two or more explanations for something, all of which are about as good as each other, the explanation you should prefer is the simplest one. This principle of simplicity in good reasoning is sometimes called Ockham’s Razor. It was first articulated by a Franciscan friar named William of Ockham, who lived from 1288 to 1348. His actual words were “Entia non sunt multiplicanda sine necessitate.”18 In English, this means ‘Entities are not to be multiplied without necessity.’ This is a fancy way of saying, ‘Well, it’s possible that there are twenty-three absolutely identical tables occupying exactly the same position in space and time, but it’s much simpler to believe that there’s just one table here. So let’s go with the simpler explanation.’ Ockham’s original point was theological: he wanted to explain why monotheism is better than polytheism. It’s simpler to assume there’s one infinite God than to assume there are a dozen or more. Ockham’s idea has since been applied to numerous other matters, from devising scientific theories to interpreting poetry, film, and literature. Other ways to express this idea go like this: “All other things being equal, the simplest explanation tends to be the truth”, and “The best explanation is the one which makes the fewest assumptions.”
There are a lot of words in every language that have more than one meaning. This is a good thing: it allows us more flexibility of expression; it is part of what makes poetry possible; and so on. But for the purpose of reasoning as clearly and as systematically as possible, it is important to use our words very carefully. This usually means avoiding metaphors, symbols, rhetorical questions, weasel words, euphemisms, tangents, equivocations, and ‘double speak’. When building a case for why something is true, or something else is not true, and so on, it is important to say exactly what one means, and to eliminate ambiguities as much as possible. The simplest way to do this is to craft good definitions. A definition can be imprecise in several ways; here are some of them.
• Too broad: it covers more things than it should.
Example of a broad definition: “All dogs are four-legged animals.” (Does that mean that all four-legged animals are dogs?)
• Too narrow: it covers too few things.
Example of a narrow definition: “All tables are furniture pieces placed in the dining rooms of houses and used for serving meals.” (Does that mean that tables in other rooms, used for other purposes, are not ‘true’ tables?)
• Circular: the word being defined, or one of its closest synonyms, appears in the definition itself.
Example of a circular definition: “Beauty is that which a given individual finds beautiful.” (This actually tells us nothing about what beauty is.)
• Too vague: The definition doesn’t really say much at all about what is being defined, even though it looks like it does.
Example of a vague definition: “Yellowism is not art or anti-art. Examples of Yellowism can look like works of art but are not works of art. We believe that the context for works of art is already art.”19 (And I don’t know what this means at all.)
Good philosophical thinking takes time. Progress in good critical thinking is often very slow. The process of critical thinking can’t be called successful just because it efficiently maximizes its inputs and outputs in the shortest measure of time: we do not produce thoughts in the mind like widgets in a factory. The reason is that good critical thinking often needs to uncover that which is subtle, hard to discern at first, and easy to overlook. I define subtlety as ‘a small difference or a delicate detail which takes on greater importance the more it is contemplated.’ As a demonstration, think of how many ways you can utter the word ‘Yes’, and mean something different every time. This also underlines the importance of precision as a good thinking habit. As another example: the colour planes in a painting by Piet Mondrian, such as his ‘Composition with Yellow, Blue, and Red’, have squares of white framed by black lines, but none of the white squares is exactly the same shade of white. You won’t notice this if you look at the painting for only a few seconds, or if you view a photo of the painting on a computer screen whose resolution isn’t fine enough to render the subtle differences. But it is the job of reason to uncover those subtleties and lay them out to be examined directly. And the search for those subtleties cannot be rushed.
When we looked at what a world view is, we defined it as ‘the sum of a set of related answers to the most important questions in life’. It’s important that one’s world view be consistent: that your answers to the big questions generally cohere well together, and do not obviously contradict each other. Inconsistent thinking usually leads to mistakes, and can produce the uncomfortable feeling of cognitive dissonance. And it can be embarrassing, too. If you are more consistent, you might still make mistakes in your thinking. But it will be a lot easier for you to identify those mistakes, and fix them. Consistency also means staying on topic, sticking to the facts, and following an argument to its conclusion. Obviously it can be fun to explore ideas in a random, wandering fashion. But as one’s problems grow more serious, it becomes more important to stay the course. Moreover, digressing too far from the topic can also lead you to commit logical fallacies such as Straw Man (creating an easy-to-defeat argument that no one actually believes) and Red Herring (deliberately diverting people away from the topic at hand).
Imprecision: Vagueness and Ambiguity20
Vagueness: You know, that one thing that someone did once.
Vicinal the midden, you keck – right? Yeah, of course! You do agree with me, don’t you?
So do you know what that means? If you don’t, but you agreed with me anyway, then you fell victim to obfuscation. Obfuscation is deliberately making something confusing or unclear, often with the intention of confusing the listener. In that case, I was saying something obscure and trying to pressure you into agreeing with me. Perhaps a doctor uses medical jargon so that she doesn’t have to explain what is actually going on – and so that you won’t ask. Don’t obfuscate. It violates my primary class rule. By the way, “Vicinal the midden, you keck” means “Near the garbage heap, you vomit.” Yes, it’s English.
Obfuscation is just one example of unclarity. Unclarity is often the sign of unclear thoughts, so we’re going to do our best to avoid being unclear for this reason. There are many types of unclarity out there, so we’ll examine them in the hope that you will avoid them.
Vague terms are imprecise. If I say, “I have a lot of M&Ms,” that doesn’t really tell you how many. Or if I say, “I’ll fix the car next month,” you don’t really have a good idea of when that will happen. Many vagueness problems can be resolved by simply defining the vague terms more precisely, or by avoiding them altogether.
There is an interesting fallacy-related paradox called a “sorites paradox” or “paradox of the heap”. Here is an example of one:
- Someone with 1 finger does not have a lot of fingers.
- If someone with 1 finger does not have a lot of fingers, someone with 2 fingers does not have a lot of fingers.
- Therefore, someone with 2 fingers does not have a lot of fingers.
- If someone with 2 fingers does not have a lot of fingers, then someone with 3 fingers does not have a lot of fingers.
- Therefore, someone with 3 fingers does not have a lot of fingers.
…Therefore, someone with 1,000,000,000 fingers does not have a lot of fingers.
What are the problems with this line of reasoning? Well, it’s definitely unsound, because someone with 1 billion fingers certainly has a lot of them. It’s a paradox because there’s a clear contradiction in what we believe and no clear way out of it: the logic seems to make sense, but the conclusion is clearly wrong. So how do we resolve this paradox? Perhaps we stipulate that anything more than 11 fingers is a lot and leave it at that. But what if we ran this argument instead?
- 1,000,000 grains of sand is a heap.
- If 1,000,000 grains of sand is a heap, then 999,999 grains of sand is a heap.
- Therefore, 999,999 grains of sand is a heap.
…Therefore 1 grain of sand is a heap.
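One way to see why the heap argument is so hard to escape is to model it. The sketch below is purely illustrative: it assumes, hypothetically, that ‘heap’ could be given a hard numeric cutoff (the cutoff value of 10,000 is invented for the example), and then shows that the inductive premise “if n grains is a heap, then n − 1 grains is a heap” fails at exactly one arbitrary point.

```python
# Illustrative sketch: model the vague word 'heap' with a hard cutoff.
# The cutoff (10,000 grains) is an arbitrary, hypothetical choice --
# which is exactly the problem the sorites paradox exploits.

def is_heap(grains, cutoff=10_000):
    """Treat 'heap' as a precise threshold (artificially)."""
    return grains >= cutoff

# Premise: 1,000,000 grains is a heap.
assert is_heap(1_000_000)

# Find every point where the inductive premise
# "if n grains is a heap, then n - 1 grains is a heap" fails.
failures = [n for n in range(1_000_000, 0, -1)
            if is_heap(n) and not is_heap(n - 1)]

print(failures)  # → [10000]: one sharp boundary, in one arbitrary place
```

The premise fails exactly once, at whatever cutoff we pick. Since any precise cutoff feels arbitrary for a vague word, we are reluctant to reject the inductive premise, and the paradox stands.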
How do we get out of this one? Maybe say that you know a heap when you see it? There is no clear resolution to the difficulties here (but you can read more at Wikipedia if you want: http://en.Wikipedia.org/wiki/Sorites_paradox ), so you just need to be aware of issues like this. I bring up sorites paradoxes because there is an important set of fallacies related to them, known as slippery slopes. A fallacy is a method of reasoning that generally goes wrong. The methods employed by most fallacies are just ways of reasoning or making arguments, and they are not automatically wrong (though some formal fallacies can never be correct); many informal fallacies use a line of reasoning that might actually be correct in some contexts, but is usually incorrect when you encounter it. Slippery slopes are a prime example: people use them and they generally result in faulty conclusions, but they may sometimes yield a strong argument if done carefully in the right context.
The following is an example of a slippery slope:
Because we cannot draw a clear line between when someone is bald and is not bald, we cannot say that anyone is bald.
You can see how this is similar to a sorites paradox, and it goes a little further by claiming that because we can’t draw a line between traits when something changes gradually, we can’t distinguish them at all.
All slippery slopes have this general format: A makes sense, but A leads to B, and B is clearly wrong, so we should reject A. In the case of baldness: you are not bald if you have 120,000 hairs, and if you are not bald with 120,000 hairs, then you are not bald with 119,999. Makes sense, right? But if this is true, then (via a sorites) you are not bald even if you have only 1 hair. Since that is clearly wrong, we seem forced to reject the claim we started with – which would mean that even someone with a full head of hair is bald.
In a slippery slope, you are worried about gradually sliding into a problem. These examples we have seen are examples of conceptual slippery slopes, which have 2 general assumptions:
1) We should not draw a distinction between things that are not significantly different
2) If A is not significantly different than B and B is not significantly different than C, then A is not significantly different than C
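Assumption (2) is a claim of transitivity, and it fails for vague comparisons. The sketch below is illustrative only: it assumes, hypothetically, that “not significantly different” means “differs by less than some tolerance” (the tolerance value and numbers are invented for the example).

```python
# Illustrative sketch: "not significantly different" modeled as
# "differs by less than a tolerance". The tolerance is hypothetical.

TOLERANCE = 1.0

def not_significantly_different(a, b):
    """True when a and b are within the (arbitrary) tolerance."""
    return abs(a - b) < TOLERANCE

a, b, c = 0.0, 0.6, 1.2

print(not_significantly_different(a, b))  # True: a ~ b
print(not_significantly_different(b, c))  # True: b ~ c
print(not_significantly_different(a, c))  # False: yet a is NOT ~ c
```

Small differences can chain into a big one, so the relation is not transitive, and assumption (2) is where a conceptual slippery slope goes wrong.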
The real question to ask is this:
When do small differences make big differences?
Slippery slopes are common in ethics, and depending upon how you view things, they can be either a good argument or a terrible one. One kind can be called a “fairness slippery slope”, and awarding grades provides an excellent example:
If a person with 90% deserves an A, then it’s only fair to give someone with 89% an A since there is no significant difference between their percentage in the class….If a person with 50% deserves an A, then a person with 49% deserves an A…
There are also causal slippery slope arguments, like:
Although assisted suicide (allowing someone, such as a doctor, to help someone else commit suicide) in certain cases is OK, we shouldn’t allow it because it will lead to euthanasia (a person humanely ending the life of another person that is suffering and requests it), and that’s not OK.
This can be a good argument if you think euthanasia really is a problem AND allowing assisted suicide will really lead to it. It is a fallacy if you incorrectly believe, or do not have reasons to support, the idea that assisted suicide WILL lead to euthanasia. There are many other examples of causal slippery slopes in which someone doesn’t want to allow one thing they might like because they’re worried it will lead to something they don’t: gay marriage (the worry that it might lead to polygamy), military involvement (if we get into Syria, we will have to get into Iran), raises (If I give you a raise, I’ll have to give everyone else a raise), etc. They have a specific format:
A is probably acceptable, but A leads to B, and B is not acceptable, so we shouldn’t allow A.
The important questions to ask are:
Is the result really bad? Is the result really very likely? Does this bad result outweigh the benefits of the proposal?
College students make tasty snacks! Think about whether this is said at a college bake sale, or whether it is said by a giant in the afternoon after playing a long game of squash in the college dorms. The sentence is ambiguous because you don’t know exactly what I mean, since it can be understood in different ways. There are two kinds of ambiguity:
Semantic ambiguity: ambiguous words
“I like football” (which football?)
“Let’s all go to the shower” (the wedding shower, pervert)
Syntactic ambiguity: amphiboly, or ambiguous grammar
“How to get money out of politics” (stopping money from being spent on politics, or making money from being a politician?)
“The conquest of the Persians” (were the Persians doing the conquering, or being conquered?)
We can clarify an ambiguous statement by giving more information:
Mary had a little lamb…
It followed her to school
And then some broccoli
The primary ways of fixing the problems of ambiguity are:
1) Distinguish between the possible meanings
2) Restate things clearly
3) Reevaluate what is being said (in case you are actually wrong)