# 8.4: Rules of Replacement


The final rules are fairly simple (in fact, they’re *simpler* than the rules of implication), but they work in a very different way. They’re called the rules of *replacement* because they allow you to simply transform or replace a formula (or subformula) with a logically equivalent formula.

If you look back at the rules of implication, you’ll notice that they have a sort of “premise-conclusion” format. If you find the premises, then you can derive the conclusion.

Rules of replacement have a different structure because each is simply a statement that two formulas are logically equivalent. So instead of going in one direction, we can move in **either direction** in the rule. Here’s an almost ridiculously simple rule of replacement:

\( \neg \neg P :: P \)

The first thing to note is that little square of dots. That’s what’s called a “metalogical” symbol, but you don’t have to worry too much about that. What it means for our purposes is simple: you can replace these with each other (i.e.,[1] they are logically equivalent). A :: B means “you can replace A with B and B with A.” The first difference between rules of replacement and rules of implication, therefore, is that rules of replacement are **bidirectional**: one can move from the left to the right or from the right to the left. With the rules of implication, we were only allowed to move *down* from the premises to the conclusion; we can’t move *up* from the conclusion back to the premises. That wouldn’t be a valid move. The rules of replacement work in both ways.

This means we can derive from *any formula* the double-negated form of that formula and vice versa. Put another way, we can *remove two negations* or add two negations to any formula we want. See how simple some of these rules can be?

Let’s introduce the rest of the rules, and then talk a bit more about the differences between rules of replacement and rules of implication. I use the Greek letters *Delta*, *Omega*, and *Phi* to help get us into the habit of thinking of these as *general patterns* rather than as specific formulas with just single letters in place of the disjuncts, conjuncts, antecedents, consequents, etc.

\( \neg ( \Delta \bullet \Omega) :: (\neg \Delta \vee \neg \Omega)\)

\( \neg ( \Delta \vee \Omega) :: (\neg \Delta \bullet \neg \Omega)\)

In short (DeMorgan’s Rules, DM): “distributing” a negation the way one might distribute an exponent in math (or, for that matter, “undistributing” as one does moving from right to left in these rules) results in flipping an “and” to an “or” and vice versa.
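If you want to see *why* this is a legitimate replacement, you can check it semantically with a quick truth-table sweep. The following is just an illustrative sketch in Python (not part of our proof system); it confirms that the two sides of each DM rule agree on every possible assignment of truth values:

```python
from itertools import product

# Semantic check of DeMorgan's equivalences: the left and right sides
# must agree on every assignment of truth values to Delta (D) and Omega (O).
for D, O in product([True, False], repeat=2):
    assert (not (D and O)) == ((not D) or (not O))  # ~(D . O) :: (~D v ~O)
    assert (not (D or O)) == ((not D) and (not O))  # ~(D v O) :: (~D . ~O)
print("DM holds on all four rows")
```

Since the two sides never differ in truth value, replacing one with the other can never change the truth value of the larger formula.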

\( ( \Delta \vee \Omega) :: (\Omega \vee \Delta)\)

\( ( \Delta \bullet \Omega) :: (\Omega \bullet \Delta )\)

In short (Commutativity, Com): you can switch conjuncts with one another and disjuncts with one another. Note: this only works with conjunction (\(\bullet\) or \(\wedge\)) and disjunction (\(\vee\)). It doesn’t work with any other connectives.

\([ \Delta \vee (\Omega \vee \Phi )] :: [(\Delta \vee \Omega ) \vee \Phi ]\)

\([ \Delta \bullet ( \Omega \bullet \Phi )] :: [(\Delta \bullet \Omega ) \bullet \Phi ]\)

In short (Associativity, Assoc): if you have two adjacent disjunctions or conjunctions, you may move the parentheses to *associate* the two letters that are not being associated in the base formula. Note: this only works with conjunction (\(\bullet\) or \(\wedge\)) and disjunction (\(\vee\)). It doesn’t work with any other connectives.

\( \neg \neg \Phi :: \Phi \)

In short (Double Negation, DN): you can add or remove two **immediately adjacent** negations to a formula or subformula. Note: the negations cannot be separated, so this doesn’t work: \(\neg(\neg Z \rightarrow X) :: (Z \rightarrow X)\) [invalid!]

\(( \Phi \rightarrow \Omega) :: (\neg \Omega \rightarrow \neg \Phi ) \)

In short (Transposition, Trans): you can switch the antecedent and consequent as long as you add one negation to each or take away one negation from each. Note: Trans must be combined with DN if you want to add one negation and take one negation away.

\(( \Phi \rightarrow \Omega ) :: (\neg \Phi \vee \Omega )\)

In short (Material Implication, Impl): you can change an implication into a disjunction as long as you negate the antecedent. You can also replace a disjunction with an implication, so long as the left disjunct is negated. If you do, remove *one* negation from the left disjunct.
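Trans and Impl can be verified the same way as DM: check that the two sides of each rule have the same truth value in every row. A small Python sketch (the helper name `implies` is mine, just for illustration):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when the antecedent is true
    # and the consequent is false.
    return False if (p and not q) else True

for P, Q in product([True, False], repeat=2):
    assert implies(P, Q) == implies(not Q, not P)  # Trans: (P -> Q) :: (~Q -> ~P)
    assert implies(P, Q) == ((not P) or Q)         # Impl:  (P -> Q) :: (~P v Q)
```

Both assertions pass on all four rows, which is exactly what licenses these bidirectional replacements.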

Another important difference between rules of replacement and rules of implication is that rules of replacement can be used on *subformulas*. A subformula is any formula that is part of another formula. “(A\(\rightarrow\)B)” is a formula, so it’s a subformula of #1 below. “P \(\leftrightarrow\) \(\neg\)” is not a well-formed formula, so it’s not a subformula of #3 below. I can apply a rule of replacement to only *part* of a formula. So the following derivation is deductively valid even though all of the changes happen to only part of the premise and subsequent formulas:

1. P \(\leftrightarrow\) (A\(\rightarrow\)B) / P \(\leftrightarrow\) \(\neg\)(A \(\wedge\) \(\neg\)B)

2. P \(\leftrightarrow\) (\(\neg\)A\(\vee\)B) 1, Impl

3. P \(\leftrightarrow\) \(\neg\)\(\neg\)(\(\neg\)A \(\vee\) B) 2, DN

4. P \(\leftrightarrow\) \(\neg\)(\(\neg\)\(\neg\)A \(\wedge\)\(\neg\)B) 3, DM

5. P \(\leftrightarrow\) \(\neg\)(A \(\wedge\) \(\neg\)B) 4, DN

If we know that each of the replaced subformulas is logically equivalent to the one it replaces, then (since equivalence is transitive) we also know that all of them will be materially equivalent (biconditional) to P (if, that is, proposition 1 is true).
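We can confirm that transitive chain of equivalences semantically. This Python sketch (again, just a check from outside the proof system) evaluates the right-hand side of each line of the derivation above and confirms they agree on every row:

```python
from itertools import product

def implies(p, q):
    return (not p) or q  # material conditional

# Each line of the derivation replaces a subformula with an equivalent
# one, so the right-hand sides must agree row by row.
for A, B in product([True, False], repeat=2):
    line1 = implies(A, B)                    # A -> B
    line2 = (not A) or B                     # ~A v B        (Impl)
    line3 = not (not ((not A) or B))         # ~~(~A v B)    (DN)
    line4 = not ((not (not A)) and (not B))  # ~(~~A . ~B)   (DM)
    line5 = not (A and (not B))              # ~(A . ~B)     (DN)
    assert line1 == line2 == line3 == line4 == line5
```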

To summarize: rules of replacement are **bidirectional** and can be applied to **subformulas** (parts of formulas), whereas rules of implication are **unidirectional** and apply only to **whole well-formed formulas (WFFs)**.

## Stacking rules

Let’s discuss another wrinkle: your instructor may be okay with you stacking a couple of rules on one line. For instance, you may be able to throw a Trans and a DN together to add a negation to one part and remove one from another. The important caveat is that you must cite the rules in the order they were applied, to minimize confusion. The above proof might, for instance, look like this:

1. P \(\leftrightarrow\) (A\(\rightarrow\)B) / P \(\leftrightarrow\) \(\neg\)(A \(\wedge\) \(\neg\)B)

2. P \(\leftrightarrow\) \(\neg\)\(\neg\)(\(\neg\)A \(\vee\) B) 1, Impl, DN

3. P \(\leftrightarrow\) \(\neg\)(A \(\wedge\) \(\neg\)B) 2, DM, DN

Generally, this is restricted to Com and DN since these are quick and simple changes rather than dramatic transformations of the formulas involved.

## Com and the Strictness of Rules

Once we have these rules on our tool belts, we can actually go back and be a bit stricter with some of the previous rules. Sometimes your instructor may restrict the rules to apply only to a very specific pattern, whereas other instructors will be more liberal with the rules. For instance, DS looks like this:

\[\begin{align*}{} & P \vee Q \\ & \underline{\neg P \ \ \ \ } \\ & Q \end{align*}\]

Notice how it’s the *left* disjunct being negated here. If we left the rule at that, and the disjuncts were reversed, we’d have to use Commutativity to switch around the P and Q before we do DS. So it’d have to look like this:

1. Q \(\vee\) P

2. \(\neg\)P /Q

3. P \(\vee\) Q 1, Com

4. Q 2,3 DS

In this case if you stacked rules, you’d need to cite “Com” as the first rule because DS requires Com to happen *first*.

Similarly with simplification, which sometimes is written to only allow you to simplify the *left* conjunct. Here’s an example of a proof that follows this rule strictly:

1. P \(\bullet\) ~Q

2. R \(\supset\) Q / ~R \(\bullet\) P

3. P 1, Simp

4. ~Q 1, Com, Simp

5. ~R \(\vee\) Q 2, Impl

6. Q \(\vee\) ~R 5, Com

7. ~R 4, 6, DS

8. ~R \(\bullet\) P 7, 3, Conj

### All 14 Rules (simplified form)

Table \(\PageIndex{1}\): All 14 rules (simplified form).

| Rule | Pattern | Rule | Pattern |
|---|---|---|---|
| MP | \(\Delta \rightarrow \Omega,\ \Delta\ \therefore\ \Omega\) | CD | \((\varphi \rightarrow \Phi) \wedge (\Delta \rightarrow \Omega),\ (\varphi \vee \Delta)\ \therefore\ (\Phi \vee \Omega)\) |
| MT | \(\Delta \rightarrow \Omega,\ \neg \Omega\ \therefore\ \neg \Delta\) | Simp | \(\Delta \wedge \Omega\ \therefore\ \Delta\) |
| DS | \(\Delta \vee \Omega,\ \neg \Omega\ \therefore\ \Delta\) | Conj | \(\Phi,\ \Omega\ \therefore\ (\Phi \wedge \Omega)\) |
| HS | \(\Delta \rightarrow \Omega,\ \Omega \rightarrow \varphi\ \therefore\ \Delta \rightarrow \varphi\) | Add | \(\Phi\ \therefore\ (\Phi \vee \Omega)\) |
| DM | \(\neg (\Delta \bullet \Omega) :: (\neg \Delta \vee \neg \Omega)\); \(\neg (\Delta \vee \Omega) :: (\neg \Delta \bullet \neg \Omega)\) | DN | \(\neg \neg \Phi :: \Phi\) |
| Com | \((\Delta \vee \Omega) :: (\Omega \vee \Delta)\); \((\Delta \bullet \Omega) :: (\Omega \bullet \Delta)\) | Trans | \((\Phi \rightarrow \Omega) :: (\neg \Omega \rightarrow \neg \Phi)\) |
| Assoc | \([\Delta \vee (\Omega \vee \Phi)] :: [(\Delta \vee \Omega) \vee \Phi]\); \([\Delta \bullet (\Omega \bullet \Phi)] :: [(\Delta \bullet \Omega) \bullet \Phi]\) | Impl | \((\Phi \rightarrow \Omega) :: (\neg \Phi \vee \Omega)\) |

### All 14 Rules (in simplified phrases, not strictly worded)

Table \(\PageIndex{2}\): All 14 rules (in simplified phrases, not strictly worded).

| Rule | In short |
|---|---|
| MP | Find the antecedent, write the consequent |
| CD | Like two MPs side-by-side |
| MT | Negate the consequent, negate the antecedent |
| Simp | Write a conjunct by itself |
| DS | Negate one disjunct, write the other |
| Conj | Stick two formulas together with a ‘\(\bullet\)’ (or ‘\(\wedge\)’) between |
| HS | Take out the “middle man” |
| Add | Add anything to the right of a ‘\(\vee\)’ |
| DM | Distribute or un-distribute a negation, change \(\vee\) to \(\wedge\) and vice versa |
| DN | Add or remove two negations |
| Com | Switch conjuncts or disjuncts with their partners |
| Trans | Like MT but in replacement form |
| Assoc | Move parentheses around two \(\vee\)’s or two \(\bullet\)’s (or \(\wedge\)’s) |
| Impl | Switch between \(\rightarrow\) and \(\vee\), negate or un-negate the antecedent |

Now let’s do a couple of proofs involving the rules of replacement.

1. ~(A\(\bullet\)~B)

2. ~(B\(\bullet\)~C) / A \(\rightarrow\) C

If we pay attention to the placement of the A, the B’s, and the C in 1 and 2, we might notice that they’re pretty close to a Hypothetical Syllogism. There’s a B on the right of one and on the left of the other, so there’s a “middle man” that we can eliminate using HS. The result would be the conclusion. So our goal is to get 1 and 2 into the proper format to be able to apply HS.

How do we do that? Well, looking at 1 and 2, we might notice that there’s really only one rule which allows us to transform a negated conjunction. What rule is that? Look at the table of rules above. I’ll wait. Right-O! DeMorgan’s Rules are the only way we know of (in this class) to transform a negated conjunction or disjunction. So let’s just go ahead and apply DM and see what happens. We distribute the outside negation to both conjuncts and then change the dot to a vee, like so:

3. ~A \(\vee\) ~~B 1,DM

4. ~B \(\vee\) ~~C 2,DM

Remember we want to make conditionals. What rule allows us to change a disjunction into a conditional? Yep! Material Implication will do just that. We also don’t want all of those ugly negations, so we just apply double negation to remove them:

5. A \(\rightarrow\) B 3, DN, Impl

6. B \(\rightarrow\) C 4, DN, Impl

What was that last step we decided on waaaaay at the beginning of the proof? Right on. Hypothetical Syllogism to take out the middle man “B” and leave us with our conclusion:

7. A \(\rightarrow\) C 5, 6, HS

QED! (QED stands for the Latin “Quod Erat Demonstrandum”, or “which is what was to be demonstrated”. So we can say “QED” when we’re done with a proof or demonstration—that is, when we’ve reached the conclusion—as a way of saying “I proved what I was supposed to prove!”)
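As a sanity check on the whole argument, we can confirm validity semantically: a valid argument has no row of the truth table where every premise is true and the conclusion is false. A Python sketch (again, outside the proof system):

```python
from itertools import product

# Scan all eight assignments to A, B, C; validity means there is no
# row where both premises hold but the conclusion fails.
for A, B, C in product([True, False], repeat=3):
    prem1 = not (A and not B)  # ~(A . ~B)
    prem2 = not (B and not C)  # ~(B . ~C)
    concl = (not A) or C       # A -> C (evaluated via Impl)
    assert not (prem1 and prem2 and not concl)
```

No counterexample row exists, which agrees with the derivation we just completed.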

How about we try another?

**Thank you, Sir! May I please have another!**

Uh oh, it’s bad news when your students start talking to you like a Private in Boot Camp! Anyways, here’s the problem:

1. S \(\bullet\) ~M

2. (P \(\vee\) R) \(\supset\) M / ~P \(\bullet\) ~R

One way to think about a problem that looks like this is that we have to find ~P and then find ~R and then we can use Conjunction to stick them together to make the conclusion. This won’t really work in this case, though, since the P and R are right next to one another in the same subformula, so we won’t really be able to get them alone—at least not without some heartache. With that in mind, let’s try something different. What if we tried to negate (P \(\vee\) R)? If we did that, we could use a certain rule to turn ~(P \(\vee\) R) into (~P \(\bullet\) ~R). Which rule?

**DeMorgan’s Rule!**

Yepperino. So now all we have to do is negate the left side of conditional #2. How do we negate the left side of a conditional?

**Modus Ponens!**

Close, but not quite.

**Modus Tollens!**

There you go. Remember “Ponens” means affirming and “Tollens” means negating. In this case we want to negate. What else do we need in order to do an MT?

**~M**

Yes, dear student. You’re being so smart. Now how are we going to get that ~M? To finish the proof out, I’m going to use a stricter version of Simplification according to which one can *only simplify the left side* of a conjunction. If that’s the case, we’ll need to switch #1 around left to right before simplifying. Which rule allows us to do this? Com:

3. ~M \(\bullet\) S 1, Com

Excellent, now simplify and then Modus Tollens:

4. ~M 3, Simp

5. ~(P \(\vee\) R) 2,4,MT

What happens next? Remember way back to the beginning when we were talking about turning 5 into the conclusion. We’ll use DeMorgan’s Rule:

6. ~P \(\bullet\) ~R 5, DM
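QED again. And, as before, we can double-check validity with a truth-table sweep over all sixteen assignments (an illustrative Python sketch, not part of the proof itself):

```python
from itertools import product

# Validity check for: S . ~M;  (P v R) -> M  // ~P . ~R
for S, M, P, R in product([True, False], repeat=4):
    prem1 = S and not M          # S . ~M
    prem2 = (not (P or R)) or M  # (P v R) -> M (evaluated via Impl)
    concl = (not P) and (not R)  # ~P . ~R
    assert not (prem1 and prem2 and not concl)
```

Again, no row makes both premises true and the conclusion false, so the argument is valid, just as the derivation shows.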

[1] Fun Fact: i.e. is an abbreviation of “id est”, which is Latin for “that is”. ‘e.g.’ is an abbreviation of ‘exempli gratia’, which is Latin for “for example”.