# 1.9: “… if and only if …”, Using Theorems


## 9.1 A historical example

The philosopher David Hume (1711-1776) is remembered for being a brilliant skeptical empiricist. A person is a skeptic about a topic if that person both has very strict standards for what constitutes knowledge about that topic and also believes we cannot meet those strict standards. Empiricism is the view that we primarily gain knowledge through experience, particularly experiences of our senses. In his book An Enquiry Concerning Human Understanding, Hume lays out his principles for knowledge, and then advises us to clean up our libraries:

When we run over libraries, persuaded of these principles, what havoc must we make? If we take in our hand any volume of divinity or school metaphysics, for instance, let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion.^{[11]}

Hume felt that the only sources of knowledge were logical or mathematical reasoning (which he calls above “abstract reasoning concerning quantity or number”) or sense experience (“experimental reasoning concerning matter of fact and existence”). Hume is thus led to argue that any claims not based upon one or the other method are worthless.

We can reconstruct Hume’s argument in the following way. Suppose t is some topic about which we claim to have knowledge, and suppose that we did not get this knowledge from experience or logic. Written in English, the argument runs:

We have knowledge about t if and only if our claims about t are learned from experimental reasoning or from logic or mathematics.

Our claims about t are not learned from experimental reasoning.

Our claims about t are not learned from logic or mathematics.

_____

We do not have knowledge about t.

What does that phrase “if and only if” mean? Philosophers think that it, and several synonymous phrases, are used often in reasoning. Leaving “if and only if” unexplained for now, we can use the following translation key to write up the argument in a mix of our propositional logic and English.

P: We have knowledge about t.

Q: Our claims about t are learned from experimental reasoning.

R: Our claims about t are learned from logic or mathematics.

And so we have:

P if and only if (QvR)

¬Q

¬R

_____

¬P

Our task is to add to our logical language an equivalent to “if and only if”. Then we can evaluate this reformulation of Hume’s argument.

## 9.2 The biconditional

Before we introduce a symbol synonymous with “if and only if”, and then lay out its syntax and semantics, we should start with an observation. A phrase like “P if and only if Q” appears to be an abbreviated way of saying “P if Q and P only if Q”. Once we notice this, we do not have to try to discern the meaning of “if and only if” using our expert understanding of English. Instead, we can discern the meaning of “if and only if” using our already rigorous definitions of “if”, “and”, and “only if”. Specifically, “P if Q and P only if Q” will be translated “((Q→P)^(P→Q))”. (If this is unclear to you, go back and review section 2.2.) Now, let us make a truth table for this formula.

| P | Q | (Q→P) | (P→Q) | ((Q→P)^(P→Q)) |
|---|---|---|---|---|
| T | T | T | T | T |
| T | F | T | F | F |
| F | T | F | T | F |
| F | F | T | T | T |
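The table above can also be checked mechanically. The following Python sketch computes the final column for every assignment (the helper name `implies` is ours, not part of the text’s logic):

```python
from itertools import product

def implies(a, b):
    # Material conditional: (a -> b) is false only when a is true and b false.
    return (not a) or b

# Compute ((Q -> P) ^ (P -> Q)) for every assignment to P and Q.
rows = [(p, q, implies(q, p) and implies(p, q))
        for p, q in product((True, False), repeat=2)]

for p, q, v in rows:
    print(p, q, v)
```

The conjunction comes out true exactly on the rows where P and Q agree, as in the table.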

We have settled the semantics for “if and only if”. We can now introduce a new symbol for this expression. It is traditional to use the double arrow, “↔”. We can now express the syntax and semantics of “↔”.

If Φ and Ψ are sentences, then

(Φ↔Ψ)

is a sentence. This kind of sentence is typically called a “biconditional”.

The semantics is given by the following truth table.

| Φ | Ψ | (Φ↔Ψ) |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | T |

One pleasing result of our account of the biconditional is that it allows us to succinctly explain the syntactic notion of logical equivalence. We say that two sentences Φ and Ψ are “equivalent” or “logically equivalent” if (Φ↔Ψ) is a theorem.
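This syntactic definition has a semantic counterpart that is easy to test by machine: two sentences are equivalent when the biconditional between them is true on every row of the truth table. A small Python sketch (the helper names `implies` and `equivalent` are ours):

```python
from itertools import product

def implies(a, b):
    # Material conditional.
    return (not a) or b

def equivalent(f, g, n=2):
    # f and g map a truth assignment to a truth value; they are equivalent
    # when (f <-> g) is true on every row of the truth table.
    return all(f(*row) == g(*row) for row in product((True, False), repeat=n))

# (P -> Q) is equivalent to its contrapositive (not-Q -> not-P) ...
print(equivalent(lambda p, q: implies(p, q),
                 lambda p, q: implies(not q, not p)))
# ... but not to its converse (Q -> P).
print(equivalent(lambda p, q: implies(p, q),
                 lambda p, q: implies(q, p)))
```

The first check prints True; the second prints False, since the conditional and its converse disagree when P is true and Q is false.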

## 9.3 Alternative phrases

In English, it appears that there are several phrases that usually have the same meaning as the biconditional. Each of the following sentences would be translated as (P↔Q).

P if and only if Q.

P just in case Q.

P is necessary and sufficient for Q.

P is equivalent to Q.

## 9.4 Reasoning with the biconditional

How can we reason using a biconditional? At first, it would seem to offer little guidance. If I know that (P↔Q), I know that P and Q have the same truth value, but from that sentence alone I do not know if they are both true or both false. Nonetheless, we can take advantage of the semantics for the biconditional to observe that if we also know the truth value of one of the sentences constituting the biconditional, then we can derive the truth value of the other sentence. This suggests a straightforward set of rules. These will actually be four rules, but we will group them together under a single name, “equivalence”:

(Φ↔Ψ)

Φ

_____

Ψ

and

(Φ↔Ψ)

Ψ

_____

Φ

and

(Φ↔Ψ)

¬Φ

_____

¬Ψ

and

(Φ↔Ψ)

¬Ψ

_____

¬Φ

What if we instead are trying to show a biconditional? Here we can return to the insight that the biconditional (Φ↔Ψ) is equivalent to ((Φ→Ψ)^(Ψ→Φ)). If we can prove both (Φ→Ψ) and (Ψ→Φ), then we know that (Φ↔Ψ) must be true.

We can call this rule “bicondition”. It has the following form:

(Φ→Ψ)

(Ψ→Φ)

_____

(Φ↔Ψ)

This means that often when we aim to prove a biconditional, we will undertake two conditional derivations to derive the two conditionals, and then apply the bicondition rule.

## 9.5 Returning to Hume

We can now see whether we are able to prove Hume’s argument. With the new biconditional symbol in hand, we can begin a direct proof with our three premises.

We have already observed that we think (QvR) is false, because ¬Q and ¬R. So let’s prove ¬(QvR). This sentence cannot be proved directly, given the premises we have; and it cannot be proved with a conditional derivation, since it is not a conditional. So let’s try an indirect proof. We believe that ¬(QvR) is true, so we will assume its denial and show a contradiction.

Hume’s argument, at least as we reconstructed it, is valid.
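The validity of the reconstruction can also be confirmed semantically, by brute force: an argument is valid when no assignment of truth values makes all the premises true and the conclusion false. A sketch in Python (the function and variable names are ours):

```python
from itertools import product

def valid(premises, conclusion, n):
    # Valid: every row of the truth table that makes all premises true
    # also makes the conclusion true.
    return all(conclusion(*row)
               for row in product((True, False), repeat=n)
               if all(premise(*row) for premise in premises))

hume_valid = valid(
    premises=[lambda p, q, r: p == (q or r),  # P if and only if (Q v R)
              lambda p, q, r: not q,          # not-Q
              lambda p, q, r: not r],         # not-R
    conclusion=lambda p, q, r: not p,         # not-P
    n=3,
)
print(hume_valid)
```

The check prints True: the only row making all three premises true is the one where P, Q, and R are all false, and there the conclusion holds.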

Is Hume’s argument sound? Whether it is sound depends upon the first premise above (since the second and third premises are abstractions about some topic t). More specifically, it depends upon the claim that we have knowledge about something just in case we can show it with experiment or logic. Hume argues that we should distrust, and indeed burn texts containing, claims that are not from experiment and observation, or from logic and math. But consider this claim: we have knowledge about a topic t if and only if our claims about t are learned from experiment or our claims about t are learned from logic or mathematics.

Did Hume discover this claim through experiments? Or did he discover it through logic? What fate would Hume’s book suffer, if we took his advice?

## 9.6 Some examples

It can be helpful to prove some theorems that make use of the biconditional, in order to illustrate how we can reason with the biconditional.

Here is a useful principle. If two sentences have the same truth value as a third sentence, then they have the same truth value as each other. We state this as (((P↔Q)^(R↔Q))→(P↔R)). To illustrate reasoning with the biconditional, let us prove this theorem.

This theorem is a conditional, so it will require a conditional derivation. The consequent of the conditional is a biconditional, so we will expect to need two conditional derivations, one to prove (P→R) and one to prove (R→P), followed by an application of the bicondition rule.

We have mentioned before the principles that we associate with the mathematician Augustus De Morgan (1806-1871), and which today are called “De Morgan’s Laws” or the “De Morgan Equivalences”. These are the recognition that ¬(PvQ) and (¬P^¬Q) are equivalent, and also that ¬(P^Q) and (¬Pv¬Q) are equivalent. We can now express these with the biconditional. The following are theorems of our logic:

(¬(PvQ)↔(¬P^¬Q))

(¬(P^Q)↔(¬Pv¬Q))

We will prove the second of these theorems. This is perhaps the most difficult proof we have seen; it requires nested indirect proofs, and a fair amount of cleverness in finding what the relevant contradiction will be.
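Independently of the proofs, both equivalences can be spot-checked semantically: each biconditional is true on all four rows of the truth table. A quick Python check (here `==` on booleans plays the role of the biconditional):

```python
from itertools import product

# De Morgan: not-(P v Q) <-> (not-P ^ not-Q), and
#            not-(P ^ Q) <-> (not-P v not-Q).
for p, q in product((True, False), repeat=2):
    assert (not (p or q)) == ((not p) and (not q))
    assert (not (p and q)) == ((not p) or (not q))
print("both De Morgan equivalences hold on every row")
```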

## 9.7 Using theorems

Every sentence of our logic is, in semantic terms, one of three kinds. It is either a tautology, a contradictory sentence, or a contingent sentence. We have already defined “tautology” (a sentence that must be true) and “contradictory sentence” (a sentence that must be false). A contingent sentence is a sentence that is neither a tautology nor a contradictory sentence. Thus, a contingent sentence is a sentence that might be true, or might be false.

Here is an example of each kind of sentence:

(Pv¬P)

(P↔¬P)

P

The first is a tautology, the second is a contradictory sentence, and the third is contingent. We can see this with a truth table.

| P | ¬P | (Pv¬P) | (P↔¬P) | P |
|---|---|---|---|---|
| T | F | T | F | T |
| F | T | T | F | F |

Notice that the negation of a tautology is a contradiction, the negation of a contradiction is a tautology, and the negation of a contingent sentence is a contingent sentence.

¬(Pv¬P)

¬(P↔¬P)

¬P

| P | ¬P | (Pv¬P) | ¬(Pv¬P) | (P↔¬P) | ¬(P↔¬P) |
|---|---|---|---|---|---|
| T | F | T | F | F | T |
| F | T | T | F | F | T |
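This three-way classification is mechanical, since it depends only on the truth table. A Python sketch (the function name `classify` is ours):

```python
from itertools import product

def classify(sentence, n=1):
    # `sentence` maps an assignment of n truth values to a truth value.
    values = [sentence(*row) for row in product((True, False), repeat=n)]
    if all(values):
        return "tautology"
    if not any(values):
        return "contradictory"
    return "contingent"

print(classify(lambda p: p or not p))    # (P v not-P)
print(classify(lambda p: p == (not p)))  # (P <-> not-P)
print(classify(lambda p: p))             # P
```

The three calls print "tautology", "contradictory", and "contingent", matching the three example sentences above.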

A moment’s reflection will reveal that it would be quite a disaster if either a contradictory sentence or a contingent sentence were a theorem of our propositional logic. Our logic was designed to produce only valid arguments. Arguments that have no premises, we observed, should have conclusions that must be true (again, this follows because a sentence that can be proved with no premises could be proved with any premises, and so it had better be true no matter what premises we use). If a theorem were contradictory, we would know that we could prove a falsehood. If a theorem were contingent, then sometimes we could prove a falsehood (that is, we could prove a sentence that is under some conditions false). And, given that we have adopted indirect derivation as a proof method, it follows that once we have a contradiction or a contradictory sentence in an argument, we can prove anything.

Theorems can be very useful to us in arguments. Suppose we know that neither Smith nor Jones will go to London, and we want to prove, therefore, that Jones will not go to London. If we allowed ourselves to use one of De Morgan’s theorems, we could make quick work of the argument. Assume the following key.

P: Smith will go to London.

Q: Jones will go to London.

And we have the following argument:

1. ¬(P v Q) (premise)
2. (¬(P v Q) ↔ (¬P ^ ¬Q)) (theorem)
3. (¬P ^ ¬Q) (equivalence, 1, 2)
4. ¬Q (simplification, 3)

This proof was made very easy by our use of the theorem at line 2.

There are two things to note about this. First, we should allow ourselves to do this, because if we know that a sentence is a theorem, then we know that we could prove that theorem in a subproof. That is, we could replace line 2 above with a long subproof that proves (¬(P v Q)↔(¬P ^ ¬Q)), which we could then use. But if we are certain that (¬(P v Q)↔(¬P ^ ¬Q)) is a theorem, we should not need to do this proof again and again, each time that we want to make use of the theorem.

The second issue that we should recognize is more subtle. There are infinitely many sentences of the form of our theorem, and we should be able to use those also. For example, the following sentences would each have a proof identical to our proof of the theorem (¬(P v Q)↔(¬P ^ ¬Q)), except that the letters would be different:

(¬(R v S) ↔ (¬R ^ ¬S))

(¬(T v U) ↔ (¬T ^ ¬U))

(¬(V v W) ↔ (¬V ^ ¬W))

This is hopefully obvious. Take the proof of (¬(P v Q)↔(¬P ^ ¬Q)), and in that proof replace each instance of P with R and each instance of Q with S, and you would have a proof of (¬(R v S)↔(¬R ^ ¬S)).

But here is something that perhaps is less obvious. Each of the following can be thought of as similar to the theorem (¬(P v Q)↔(¬P ^ ¬Q)).

(¬((P^Q) v (R^S))↔(¬(P^Q) ^ ¬(R^S)))

(¬(T v (Q v V))↔(¬T ^ ¬(Q v V)))

(¬((Q↔P) v (¬R→¬Q))↔(¬(Q↔P) ^ ¬(¬R→¬Q)))

For example, if one took a proof of (¬(P v Q)↔(¬P ^ ¬Q)) and replaced each initial instance of P with (Q↔P) and each initial instance of Q with (¬R→¬Q), then one would have a proof of the theorem (¬((Q↔P) v (¬R→¬Q))↔(¬(Q↔P) ^ ¬(¬R→¬Q))).

We could capture this insight in two ways. We could state theorems of our metalanguage and allow that these have instances. Thus, we could take (¬(Φ v Ψ) ↔ (¬Φ ^ ¬Ψ)) as a metalanguage theorem, in which we could replace each Φ with a sentence and each Ψ with a sentence and get a particular instance of a theorem. An alternative is to allow that from a theorem we can produce other theorems through substitution. For ease, we will take this second strategy.

Our rule will be this. Once we prove a theorem, we can cite it in a proof at any time. Our justification is that the claim is a theorem. We allow substitution of any atomic sentence in the theorem with any other sentence if and only if we replace each initial instance of that atomic sentence in the theorem with the same sentence.
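Any instance produced by this substitution rule is still a tautology, which we can confirm semantically for one of the instances above: replace P with (P^Q) and Q with (R^S) in De Morgan’s theorem and check all sixteen rows of the resulting four-letter truth table.

```python
from itertools import product

# Substituting (P ^ Q) for P and (R ^ S) for Q in
# not-(P v Q) <-> (not-P ^ not-Q) yields a sentence in four letters;
# it is true on all sixteen rows.
instance_holds = all(
    (not ((p and q) or (r and s))) == ((not (p and q)) and (not (r and s)))
    for p, q, r, s in product((True, False), repeat=4)
)
print(instance_holds)
```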

Before we consider an example, it is beneficial to list some useful theorems. There are infinitely many theorems of our language, but these ten are often very helpful. A few we have proved. The others can be proved as an exercise.

T1 (P v ¬P)

T2 (¬(P→Q) ↔ (P^¬Q))

T3 (¬(P v Q) ↔ (¬P ^ ¬Q))

T4 ((¬P v ¬Q) ↔ ¬(P ^ Q))

T5 (¬(P ↔ Q) ↔ (P ↔ ¬Q))

T6 (¬P → (P → Q))

T7 (P → (Q → P))

T8 ((P→(Q→R)) → ((P→Q) → (P→R)))

T9 ((¬P→¬Q) → ((¬P→Q) →P))

T10 ((P→Q) → (¬Q→¬P))
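Each of T1 through T10 can be confirmed to be a tautology by checking every row of an eight-row truth table. A Python sketch (the helper names are ours; `==` on booleans plays the role of “↔”):

```python
from itertools import product

def implies(a, b):
    # Material conditional.
    return (not a) or b

# Each theorem as a function of (p, q, r); letters it does not use are ignored.
theorems = {
    "T1": lambda p, q, r: p or not p,
    "T2": lambda p, q, r: (not implies(p, q)) == (p and not q),
    "T3": lambda p, q, r: (not (p or q)) == ((not p) and (not q)),
    "T4": lambda p, q, r: ((not p) or (not q)) == (not (p and q)),
    "T5": lambda p, q, r: (not (p == q)) == (p == (not q)),
    "T6": lambda p, q, r: implies(not p, implies(p, q)),
    "T7": lambda p, q, r: implies(p, implies(q, p)),
    "T8": lambda p, q, r: implies(implies(p, implies(q, r)),
                                  implies(implies(p, q), implies(p, r))),
    "T9": lambda p, q, r: implies(implies(not p, not q),
                                  implies(implies(not p, q), p)),
    "T10": lambda p, q, r: implies(implies(p, q), implies(not q, not p)),
}

for name, theorem in theorems.items():
    assert all(theorem(*row) for row in product((True, False), repeat=3)), name
print("T1 through T10 are all tautologies")
```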

Some examples will make the advantage of using theorems clear. Consider a different argument, building on the one above. We know that neither is it the case that if Smith goes to London, he will go to Berlin, nor is it the case that if Jones goes to London he will go to Berlin. We want to prove that it is not the case that Jones will go to Berlin. We add the following to our key:

R: Smith will go to Berlin.

S: Jones will go to Berlin.

And we have the following argument:

1. ¬((P→R) v (Q→S)) (premise)
2. (¬((P→R) v (Q→S)) ↔ (¬(P→R) ^ ¬(Q→S))) (theorem T3)
3. (¬(P→R) ^ ¬(Q→S)) (equivalence, 1, 2)
4. ¬(Q→S) (simplification, 3)
5. (¬(Q→S) ↔ (Q ^ ¬S)) (theorem T2)
6. (Q ^ ¬S) (equivalence, 4, 5)
7. ¬S (simplification, 6)

Using theorems made this proof much shorter than it might otherwise be. Also, theorems often make a proof easier to follow, since we recognize the theorems as tautologies—as sentences that must be true.

## 9.8 Problems

- Prove each of the following arguments is valid.
- Premises: P, ¬Q. Conclusion: ¬(P↔Q).
- Premises: (¬PvQ), (Pv¬Q). Conclusion: (P↔Q).
- Premises: (P↔Q), (R↔S). Conclusion: ((P^R)↔(Q^S)).

- Prove each of the following theorems.
- T1
- T2
- T5
- T6
- T7
- T8
- T9
- ((P^Q)↔¬(¬Pv¬Q))
- ((P→Q)↔¬(P^¬Q))

- In normal colloquial English, write your own valid argument with at least two premises, and with a conclusion that is a biconditional. Your argument should just be a paragraph (not an ordered list of sentences or anything else that looks formal like logic). Translate it into propositional logic and prove it is valid.

[11] From Hume’s Enquiry Concerning Human Understanding, p.161 in Selby-Bigge and Nidditch (1995 [1777]).