Reading: sections 1.2-3 (continued).

We did a complete discussion of the 16 binary propositional connectives in class. They are as follows:

  ``true''    (true no matter what the arguments P and Q are)
  P|Q         (inclusive or)
  P<-Q
  ``P''       (ignore second argument)
  P->Q
  ``Q''       (ignore first argument)
  P<->Q
  P&Q
  P nand Q
  P <+> Q     (exclusive or)
  ``~Q''
  P -/-> Q    (i.e., ~(P->Q))
  ``~P''
  P <-/- Q
  P nor Q
  ``false''

An example of a connective which is associative but not commutative is the connective which simply gives P. An example of a connective which is commutative but not associative is ``nand''.

Some expressions in propositional letters and propositional connectives are always true, no matter what values the propositional letters are taken to have. For example:

  P | Q | Q->P | P->(Q->P)
  ------------------------
  T | T |  T   |    T
  T | F |  T   |    T
  F | T |  F   |    T
  F | F |  T   |    T

Formulas like P->(Q->P) which are "always true" are called _tautologies_. We have a special notation

  |= P->(Q->P)

which denotes the statement that the formula P->(Q->P) is a tautology.

Point 3: (Definitions of the notions of "entailment" and "equivalence"; these are not propositional connectives, and in particular are not the same as implication and the biconditional.)

We say that a formula A entails a formula B if A->B is a tautology:

  A |= B means the same as |= A -> B

We say that a formula A is equivalent to a formula B if A<->B is a tautology:

  A == B means the same as |= A <-> B.

A == B also means that the propositional formulas A and B will always have the same truth values, whatever values their component letters may take.

The symbols |= and == are NOT propositional connectives, and sentences involving these will not appear linked by propositional connectives.
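To make the notion of tautology concrete, here is a small Python sketch (not the theorem prover implementation mentioned later; the function name is my own) that checks a formula against every assignment of truth values, just as the truth table above does:

```python
from itertools import product

def is_tautology(formula, letters):
    """True if the formula (a Python function on truth values)
    comes out true under every assignment to its letters."""
    return all(formula(*values)
               for values in product([True, False], repeat=letters))

# P -> Q rendered as "not P, or Q"
implies = lambda p, q: (not p) or q

# |= P -> (Q -> P): every row of the table is T
print(is_tautology(lambda p, q: implies(p, implies(q, p)), 2))  # True

# P -> Q by itself is not a tautology (row P=T, Q=F is F)
print(is_tautology(implies, 2))  # False
```

The checker simply enumerates the 2^n rows of the truth table, so it reproduces the table computation mechanically.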
We do introduce the following notations (allowing "negation" of sentences with entailment and equivalence):

  |/= A      for "A is not a tautology"
  A |/= B    for "A does not entail B"
  A =/= B    for "A is not equivalent to B"

In particular, |= should not be confused with -> and == should not be confused with <->. P -> Q is a propositional formula which might be true (if P is true and Q is true, for example), but might also be false (if P is true and Q is false). P |= Q, on the other hand, is false, because P->Q _can_ be false. There is a basic difference in logical form between P->Q and P|=Q: the first is a sentence involving unknown propositions P and Q, but the second, in spite of appearances, is not: it says something about all sentences P and Q. "For all propositions P and Q, P->Q" is a paraphrase of its meaning (which makes it clear that it is false).

Notice that because we showed above that P->Q and P<->(P&Q) always have the same truth value, we can conclude that

  1. P->Q == P<->(P&Q)

but even more generally, for any formulas A and B,

  2. A |= B is true if and only if A == A&B is true.

(Entailments can be coded by equivalences.) You might want to think about the difference between statements 1 and 2. Note in particular that the variables A and B (which can be replaced by arbitrary propositional formulas) are not propositional letters of the kind we put in propositional formulas!

One way of understanding the difference between entailments and equivalences on the one hand and implications and biconditionals on the other is to regard the former as containing hidden quantifiers: P |= Q could be read as ``for all P and Q, P->Q'' (which is obviously false). This is a little weird because our formal language with quantifiers (as we recall it from Math 187) does not allow quantifiers over all propositions. But it is possible to allow this.

Point 4. (Issues of syntax: precedence and grouping; relationships with associative and commutative laws.)
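The claim that entailments can be coded by equivalences can itself be checked by brute force over truth tables. Here is a hedged Python sketch (function names `entails` and `equivalent` are mine, not notation from the book) verifying it on a sample pair of formulas, and also verifying that P |= Q fails:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def entails(a, b, n):
    """A |= B : A -> B is true under all 2^n assignments."""
    return all(implies(a(*v), b(*v))
               for v in product([True, False], repeat=n))

def equivalent(a, b, n):
    """A == B : A and B agree under all 2^n assignments."""
    return all(a(*v) == b(*v)
               for v in product([True, False], repeat=n))

# Take A = P & Q and B = P.  A entails B ...
A = lambda p, q: p and q
B = lambda p, q: p
print(entails(A, B, 2))                                     # True
# ... and, as statement 2 predicts, A == A & B:
print(equivalent(A, lambda p, q: A(p, q) and B(p, q), 2))   # True

# P |= Q is false, because P -> Q can be false:
print(entails(lambda p, q: p, lambda p, q: q, 2))           # False
```

Of course this only spot-checks statement 2 on particular formulas; the general claim is about all formulas A and B.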
We could define the class of propositional formulas like this:

  1. Any letter is a propositional formula.
  2. If A is a propositional formula, (~A) is a propositional formula.
  3. If A and B are propositional formulas, (A&B), (A|B), (A->B), and (A<->B) are propositional formulas.
  4. Only those things are propositional formulas which can be shown to be propositional formulas using rules 1-3.

In clause 3, we could provide for more operations. This language definition would require us to use a lot of parentheses! We can certainly omit the outermost parentheses from any formula without ambiguity.

In the book, Dr. Grantham adopts a convention allowing him to eliminate some parentheses: negation is always assigned the highest precedence (a negation symbol governs the shortest possible expression it can -- i.e., ~A & B means (~A) & B, not ~(A & B)). We could also do this by changing point 2 of the definition to ``if A is a formula then ~A is a formula''. Dr. Grantham does not define a precedence convention for the other connectives (one is required, with the exceptions indicated above and one more below, to write all parentheses). The additional exception is that parentheses may be omitted where an associative operation is involved: conjunction, for example, is associative, so we are permitted to write A & B & C without committing ourselves to (A & B) & C or A & (B & C), because these forms are logically equivalent.

In the theorem prover implementation we will explore later, there are explicit precedence and grouping conventions, which we now describe:

  negation has the highest precedence;
  conjunction and disjunction have the next highest;
  implication and converse implication have the next highest;
  the biconditional and xor have the lowest.

An expression like X <-> Y & Z can then be disambiguated as X <-> (Y & Z), because & has higher precedence.
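The inductive definition in rules 1-4 translates directly into code. Here is a small Python sketch (my own representation, not the theorem prover's) in which a formula is a letter or a nested tuple, `is_formula` implements rules 1-4, and `show` prints a formula with all the parentheses the strict definition demands:

```python
BINOPS = {'&', '|', '->', '<->'}

def is_formula(x):
    """Rule 4: only things built by rules 1-3 are formulas."""
    if isinstance(x, str):
        return x.isalpha()                      # rule 1: a letter
    if isinstance(x, tuple):
        if len(x) == 2 and x[0] == '~':
            return is_formula(x[1])             # rule 2: (~A)
        if len(x) == 3 and x[0] in BINOPS:      # rule 3: (A op B)
            return is_formula(x[1]) and is_formula(x[2])
    return False

def show(x):
    """Render with full parentheses, as in the unabbreviated language."""
    if isinstance(x, str):
        return x
    if x[0] == '~':
        return '(~' + show(x[1]) + ')'
    return '(' + show(x[1]) + x[0] + show(x[2]) + ')'

f = ('->', 'P', ('->', 'Q', 'P'))
print(is_formula(f))   # True
print(show(f))         # (P->(Q->P))
```

Note that `show` puts in the outermost parentheses too; the convention above lets us drop those without ambiguity.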
In an expression like X & Y | Z, where the two operators have equal precedence, we will understand that the grouping is to the right (it is more usual to group to the left, but there are practical advantages to grouping to the right in calculations): so this will be read X & (Y | Z). The same applies where the two operators are the same: X & Y & Z will be equivalent to X & (Y & Z).

In written work, you may use the precedence convention I give (if you clearly indicate that you are doing so), but I would advise following Grantham's advice and using as many parentheses as are needed to make your meaning clear.

Grantham talks about "generalized associativity" in section 1.3: he discusses how an associativity axiom like X & (Y & Z) = (X & Y) & Z can be used to justify removing parentheses from expressions of any length. We will explore this issue using the theorem prover later. He also points out the difference between associativity and commutativity: an associative law permits the removal of parentheses, but not changes in the order of arguments of an operation, for which you need a commutative law X & Y = Y & X as well.
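The points above can be spot-checked by truth tables: the grouping convention genuinely matters for X & Y | Z, is harmless for the associative X & Y & Z, and (recalling the earlier example) nand is commutative but not associative. A quick Python sketch, assuming the same brute-force style as before:

```python
from itertools import product

assignments = list(product([True, False], repeat=3))

# Equal precedence groups to the right: X & Y | Z means X & (Y | Z),
# which is NOT equivalent to (X & Y) | Z (try X=F, Y=F, Z=T).
right = [x and (y or z) for x, y, z in assignments]
left  = [(x and y) or z for x, y, z in assignments]
print(right == left)   # False

# For an associative operator like &, the grouping is harmless:
r = [x and (y and z) for x, y, z in assignments]
l = [(x and y) and z for x, y, z in assignments]
print(r == l)          # True

# nand is commutative ...
nand = lambda p, q: not (p and q)
print(all(nand(x, y) == nand(y, x)
          for x, y, z in assignments))                    # True
# ... but not associative (try X=T, Y=T, Z=F):
print(all(nand(x, nand(y, z)) == nand(nand(x, y), z)
          for x, y, z in assignments))                    # False
```

So an associative law licenses dropping parentheses, while reordering arguments needs commutativity, exactly as Grantham points out.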

This is where we finished on January 19.