(From Lognet 96/1. Used with the permission of The Loglan Institute, Inc.)

Lo Lerci

(Letters)


Letters policy: Unless otherwise stated, letters addressed to logli in general, to The Institute, JCB, or any editor of Lognet will be considered as offered for publication. But it would be good if the writer explicitly offers. We reserve the right to edit letters, mostly just to drop material that has to do with ordering books, etc. If you are writing us by paper mail and your letter is a long one, we’d be grateful if you’d enclose a soft copy on a diskette. We can read most word-processors, and having your letters on disk saves us a lot of typing.

This first letter comes from Jerome Frazee, who has been a djori (member) of The Loglan Institute for decades and is one of our lodtua (logicians). Unfortunately, Jerome is not e-connected; so he is unable to serve on our Lodgru (Logic-Group), a group of lodtua who work on logical problems for The Institute as they arise. So we are especially delighted to hear from Jerome by p-mail from time to time:


Dear Jim, 

I saw the invitation from the academy (LN 95/2 p.4) for contributions to their subjunctive mood project, and I may have something worthwhile. 

Some years ago an odd thing occurred to me.  When I finally felt I had a grip on it (but not an ideal solution), I wrote myself a paper on it without seeking a publisher. The main reason I didn’t seek one is that I don’t know enough about the subject. It may all be old hat!  Anyway, here it is with all its warts just as I left it. [Jerome’s article is in this issue and starts on page 16.—JCB] 

I also saw your article on sets and multiples (LN 95/2 p.22). Wow!  You have been reading my stuff, or I have been reading yours!  Five years ago I wrote an article on exactly that: sets and multiples. That time I thought it was good enough to seek a publisher, but I couldn’t find one. [Where is that paper? Don’t we want to see it?] 

I am almost positive that my set symbols could be used as is to signify what you call “multiples” instead of sets without changing anything about their structure or even the way you do the calculating! [Jerome has developed a notation for the algebra of sets, and his paper on this notation was published in History and Philosophy of Logic, 11 (1990): 67-75. J and I are planning to republish it soon in either Lognet or La Logli. So you may look for it “soon” (that’s Institute-soon, uu, which isn’t very).] But, one would have to change the way one thought about...and used them.  

My [unpublished] realization was that sets are almost always used to talk about their members, not to talk about the one thing that a set is.  [Could this be the same distinction as the one I alluded to in my paper in LN 95/2? The one distinguishing a linguist’s definition of set (set1) from a logician’s (set2)? See Randall Holmes’ first two letters in this column for two quite different explications of set2.] When talking about the members of sets only, what is the difference between the members of sets, the denotees of denoters, and the grinks of gronks...once we have the members, denotees, and grinks defined? If we are not talking about the “herds”, what difference does it make?  Suppose all three are defined as even numbers or sheep; what can you say about members that you can’t say about denotees or grinks?  For example:  Let us say that the set S, the (each) denoter “sheep”, and gronk6 all have their members, denotees, and grinks defined as sheep.  Then we might say:  Some of the members of S are mothers.  Some of the denotees of (each) “sheep” are mothers.  Some of the grinks of gronk6 are mothers.  But now, why not just say: “Some sheep are mothers”?  We have created denoter theory (which parallels speech).  But, with denoters we can do anything...I think.  Now, go ahead and create any kind of herd you want.  The world lies at your feet. 

If O is the set of all odd numbers and E is the set of all even numbers, then saying that E and O are disjoint is like saying that (each) “even-number” is disjoint from (each) “odd-number”, i.e. they have no denotees in common.  All the machinery stays the same and you can hardly tell which one you are dealing with.  (Of course, you have to have a way to define the denotees in denoter theory just as you must have a way to define the members in set theory—and in so doing you give meaning to the denoters, something previously unknown.  If you think that the fact that you have many denoters but only one set creates a problem, remember that many similar things are what sets—and now denoters—were made to handle!  This is where sets and their mechanics shine!  But, what kind of thing is a herd of all even numbers?)
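J’s parallel can be put in a few lines of code (entirely my own illustration, not J’s notation; I use even and odd numbers, which genuinely share no denotees). A denoter is modeled as a predicate picking out its denotees, and disjointness is checked without ever collecting the denotees into a set:

```python
# A "denoter" is modeled as a predicate: it picks out its denotees directly.
def even(n):
    return n % 2 == 0   # the denoter "even-number"

def odd(n):
    return n % 2 == 1   # the denoter "odd-number"

def disjoint(p, q, domain):
    """Two denoters are disjoint iff they have no denotees in common."""
    return not any(p(n) and q(n) for n in domain)

# "even-number" and "odd-number" are disjoint over any sample of integers:
assert disjoint(even, odd, range(100))
```

Note that no set object ever appears: the machinery of the disjointness check runs on the denoters alone, which is just the point being made above.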

Denoter theory (better still, denotee theory—for everything is a denotee) pulls everything up a level and removes a mysterious middle man.  I think also that some problems of set theory do not appear.  The denoter of all denoters, for example, is simply the (each) denoter “denoter”.  By the way, does a “sheep” seem like any kind of herd? 

Thank you for your time.

Sincerely yours, Jerome Frazee


The following was originally an e-mail letter from our Cefli Lodtua (Chief Logician) Randall Holmes written in August 1995, and thus before September’s LN 95/2, when my own article on “Sets and Multiples” came out. R’s letter was part of what was then our ongoing discussion in the Lodgru about what “sets” actually were, and how they could be differently viewed on the basis of their different uses. This letter and the next one—also by R—helped me shape up, in opposition, the linguistic view of designating sets, masses, and multiples in a logical language, one that was later presented in my two articles on these topics in this and the last LN.

 

Dear JCB,

 Thinking of sets as the mass of their elements is a very misleading metaphor. [I didn’t think I was, soi crano; see “Sets and Masses” in this issue.] Another metaphor for a set is as a list or catalogue, in which an element is represented by a name of the element, not by the element itself.

 I expand on this metaphor: suppose that each object (piece of the physical universe; objects are allowed to overlap) of interest to us has a unique token associated with it (which we might call its “name”) according to some scheme. Any two tokens are disjoint physical objects. There is at least one physical object (call it the “set label”) which is disjoint physically from every token.  Any mass object made up of some collection of tokens and the set label will be called a “set”.  An object will be a member of a set if its name (the token associated with it) is a part of the set.

 In this picture, the empty set exists (the “set label” by itself is the empty set...this is the reason that the “set label” is provided; I have no scruples about a null physical object, but you are known to). The Russell class exists as a physical object (the mass consisting of the set label and all tokens of sets which do not include their own token) but cannot have a token; the result of the paradoxes in this context is the conclusion that not every object can have a token; an object which does not have a token cannot occur as a “member” of a “set”. A “real set”, in the sense of set theory, is a set in the sense already outlined which additionally happens to have a token associated with it; the “sets” which do not have tokens would be called “proper classes” in the usual set theory.

 If the physical universe is of infinite extent and so contains infinitely many disjoint physical objects, a scheme of this kind can be found which realizes any consistent system of set theory by standard results in mathematical logic (though such a model of set theory would be unsatisfactory in various ways!).  So it is possible to suppose that sets are physical objects.  But notice that the relation between tokens and objects must be essentially arbitrary. A Boy Scout troop, for example (as a set of Boy Scouts) would be realized as the union of the “names” (tokens) of the individual Boy Scouts.  Its token (if it were an object realized in the scheme) would have to be disjoint from the tokens of the individual scouts. The Boy Scouts might be supposed to be each his own token (nothing prevents this), but then any part of a Boy Scout which we might want to name (say his ear) would have to have a token disjoint from the Boy Scout himself (so from the ear itself).  So we cannot arrange for all physical objects of interest to be their own tokens.  If each Boy Scout were his own token, the Boy Scout troop (as a set) would be the mass of Boy Scouts—plus the “set label”—and so could be supposed to carry a log (the set label admittedly doesn’t help, soi crano). But the set of individual cells of the Boy Scouts would be a completely disjoint physical object from the set of the Scouts and would not carry the log (though the mass of these cells would).

 The underlying idea in this model is to preserve the image of a set as a collection of disjoint objects which can be clearly individuated (the tokens provide this) even though their elements (the objects represented by the tokens) may overlap in messy ways.  The tokens and the way in which tokens “name” objects play no other role in this model than to clarify this.
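R’s picture can be made concrete in a toy model (the atoms, names, and helper functions below are my own illustration, invented for the sketch; this is not R’s code). Physical objects are modeled as sets of “atoms,” so overlapping objects share atoms; tokens are pairwise disjoint; a “set” is the mass (union) of its members’ tokens plus the set label:

```python
# Physical objects as frozensets of "atoms"; overlapping objects share atoms.
john = frozenset({"j1", "j2"})
johns_ear = frozenset({"j2"})          # the ear overlaps john
bill = frozenset({"b1"})

# Each object of interest gets a token, disjoint from every other token
# (even when the objects themselves overlap, as john and his ear do).
token = {john: frozenset({"t_john"}),
         johns_ear: frozenset({"t_ear"}),
         bill: frozenset({"t_bill"})}

SET_LABEL = frozenset({"L"})           # disjoint from all tokens

def make_set(*objects):
    """A 'set' is the mass (union) of its members' tokens plus the label."""
    s = SET_LABEL
    for obj in objects:
        s |= token[obj]
    return s

def is_member(obj, s):
    """x is a member of s iff x's token is a part (subset) of s."""
    return token[obj] <= s

troop = make_set(john, bill)
assert is_member(john, troop) and is_member(bill, troop)
assert not is_member(johns_ear, troop)  # overlaps a member, but is no member
assert make_set() == SET_LABEL          # the label alone is the empty set
```

The disjointness of tokens is what lets membership stay unambiguous even though john and his ear overlap as physical objects, which is exactly the clarifying role R assigns to tokens.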

         —Randall

 P.S.  The physical model of set theory here is entirely my own invention; this is not what a typical mathematician (or even I) would say a set is!


This letter certainly makes clear why sets2 cannot carry logs! But it does not make clear why sets1, the collections of denumerable individuals designated as such by the routines of ordinary language—to be more precise, the designata of the set designations of ordinary language—cannot carry logs; for of course some of them do. What is important—in both formal logic and in a “logical language” (in the validity-conserving sense that Loglan is or is slowly becoming)—is that we draw the clearest possible distinction between collectivities among whose members we wish to “distribute” some claim (Each of the men is 7-or-more-feet tall) and those about which we wish to make some single, “undistributed” claim (The collective weight of the men is about 3,000 kilos). It is to do this sort of thing clearly that we need a designative apparatus in our logical language that makes the distinction between these two sorts of claiming absolutely plain.

In R’s next letter—also written in August 1995—R develops a logician’s definition of set, i.e., set2, more formally. This letter was sent to the Logli List.  R’s technical abbreviation iff means if and only if. 

 

Dear JCB (and other interested parties):

 If one introduces an (otherwise undefined) relationship called “membership” and a predicate “sethood” and imposes the following rules:

If x is a member of y, then y is a set. (Only sets have members.)

If x and y are sets and, for any z, z is a member of x iff z is a member of y, then x = y. (Sets with the same members are the same set.)

then one has specified what a set is for a mathematician, as far as this is possible.
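In symbols (one standard way of writing the two rules, with Set and membership as the undefined primitives):

```latex
\begin{align*}
&\forall x\,\forall y\,\bigl(x \in y \rightarrow \mathrm{Set}(y)\bigr)\\
&\forall x\,\forall y\,\bigl(\mathrm{Set}(x) \wedge \mathrm{Set}(y) \wedge \forall z\,(z \in x \leftrightarrow z \in y) \rightarrow x = y\bigr)
\end{align*}
```

The second rule is the axiom of extensionality; it is what makes a set nothing over and above its membership.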

 The rest of the questions about sets revolve about what sets there are, exactly.

 What sets do is provide us with objects that reify collections.  A use for sets is to reify properties (some have argued that sets are reified properties—that is, “universals” in the philosophical sense—Quine, for example).

 For any sentence P, we can define {x | P}, ‘the set of x such that P’, as the unique y (y should not be mentioned in P) such that for any x, x is a member of y iff P is true (of x).

 There are sentences P for which {x | P} cannot exist, such as ‘x is not an element of x’.  The property of non-self-membership cannot be realized by a set.  The fact that {x | x is not a member of x} does not exist is known as “Russell’s paradox”.
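The derivation of the paradox takes two lines: if the set R = {x | x is not a member of x} existed, instantiating its defining condition at R itself yields a contradiction.

```latex
R = \{\,x \mid x \notin x\,\}
\;\Longrightarrow\;
\forall x\,(x \in R \leftrightarrow x \notin x)
\;\Longrightarrow\;
(R \in R \leftrightarrow R \notin R)
```

The last biconditional is false under either assumption about R, so no such set R can exist.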

 The question that then arises is ‘What properties can be realized by sets?’. There are several different possible answers which work for the purposes of mathematics. The usual approach is based on the idea that if A is a set already given and P is any sentence, {x | x is a member of A and P} will exist. Quine’s own set theory is based on another approach. Loglan should be agnostic as between these approaches, I think.

 Why can’t sets carry logs?  My take on this is that the only thing we should admit knowing about a set is what members it has. [We mathematicians, surely. We plain speakers may wish to admit that we know a good bit more about the “collectivities” (to use an agreeably neutral word) that catch our attention.]

 The collective objects which carry logs in Loglan are mass objects; lo to mrenu can carry a log together.  Mass objects are not sets.  The reason that they are not sets is that there is no unequivocal way to establish what members they have.  Lo to mrenu, for example, is also a mass of human cells, and as such has a great many more than two “members”. [But there are numerous practical ways—like stopping at the shells of eggs and the skins of vertebrates—that provide effectively univocal ways of separating the members of sets from one another, and so counting or listing them.]

 A physical exemplification of number, to use an example of JCB’s, is not simply a physical object.  A pile of three checkers pieces, for example, is not an exemplification of “three” without further (implicit) assumptions by the agent who interprets it in this way. The agent has to know that the pile of three checkers pieces is to be broken up into checkers pieces rather than (say) atoms.  Naive human beings can communicate “three” to one another in this way only because they have a natural predisposition to see checkers pieces rather than atoms. [Indeed they do; and speech is an invention of this so-disposed animal.] A logical analysis of the “threeness” of the pile of checkers pieces has to include “checker-piece-hood” or “the set of all checkers pieces” as a component (the mass of all checkers pieces will not do).  (In Japanese, there are different sequences of numbers depending on what sort of thing is being counted; there’s a real example of what’s going on here.) [This is true; but enumerating things is neither as problematic nor as arbitrary as R is suggesting here. Numerosity perception—of eggs, stones, people, and checkers pieces—is apparently biologically built into us...just as the capacity for speech is. Moreover, it is built into a phyletically very diverse assemblage of taxa, suggesting great antiquity, as the capacity for speech isn’t. What I am suggesting is that enumerating things is probably far more ancient than designating things ...which is when the issue of sets arises.]

 Historically, the modern notion of set arose from a logical analysis of number. It was decided that number is a property of sets (whatever they are); Frege defined the number three as the set of all sets with three elements, for example (which is not a circular definition; it is possible to say what it means for a set to have three elements without mentioning “three”).  This does not work in the set theory now usually used (this set is “too large”), though it does work in some alternative set theories still in use; but it is still the case that cardinal number is regarded as being a property of sets.
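To see why Frege’s definition is not circular: “S has three elements” can be written without any mention of “three,” using only quantifiers, identity, and membership.

```latex
\exists x\,\exists y\,\exists z\,\bigl(
x \neq y \,\wedge\, x \neq z \,\wedge\, y \neq z
\;\wedge\;
\forall w\,(w \in S \leftrightarrow (w = x \vee w = y \vee w = z))
\bigr)
```

The number three is then the set of all sets satisfying this condition, which presupposes no prior notion of “three.”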

 Given this history, one has to say that a physical approximation to a “set” is that pile of three checkers pieces; but the set of atoms in those checkers pieces, which has exactly the same associated physical extent, is a different object (it has a different cardinality, for example).  Sets are not to be confused with physical masses. [But who has? Except for my 1989 L1 error of equating “ze-animals” with masses—see my article on “Sets and Masses” in this issue—which apparently set off a ripple of misplaced belief in their identity, this is not a widespread error.] Even worse, consider the set of sets of those three checkers pieces (there are eight such sets). A more realistic example: consider the set of Congressmen, the set of all committees of Congress, and the set of all possible (memberships for) committees of Congress (committees are not sets, exactly, because two distinct committees can have the same members!).

 Pre-modern concepts of “set” were mixtures of the notions “mass object” and the true notion of “set”.  Modern uses of “set” also derive from the notion of “property” and the philosophical notion of “abstraction” or “universal”.

 A set does not have its elements as parts (its subsets can be thought of as its parts).  A set with one element is generally distinct from that element (in the usual set theory, this is always true; to see that it must sometimes be true, consider the set {1,2}, which has two elements (the numbers 1 and 2), versus the set {{1,2}}, which has only one element (the set {1,2})).  The relation of part to whole is transitive (parts of parts are parts), but this is not true of membership.
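R’s last two points can be checked mechanically; here is a sketch using Python’s frozenset as a stand-in for finite sets:

```python
# {1,2} has two elements; {{1,2}} has one, so a singleton differs from
# its sole element.
a = frozenset({1, 2})
b = frozenset({a})
assert len(a) == 2 and len(b) == 1
assert a != b

# Parthood (here: the subset relation) is transitive...
assert frozenset({1}) <= a and a <= a

# ...but membership is not: 1 is in a, and a is in b, yet 1 is not in b.
assert 1 in a and a in b and 1 not in b
```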

—Hue Randall

 

R’s third letter, also sent to the Logli List, is about his theorem prover, something about which all of us have a great appetite to know more.


Dear All,

 I have modified my theorem prover to take an input language which looks like an artificial speakable language of the same general sort as Loglan (in the most general sense; it has no affinity with Loglan in any linguistic sense!), but with phonetics and grammar which are considerably simpler [than Loglan’s] to implement on a computer (though probably not nearly as satisfactory from a human standpoint).

 This was interesting for me as a test of the modularity of the prover; the parser and display functions can be extensively modified without “breaking” the reasoning engine.

 The “language” which I implemented has the following structure:

 Word classes:  constants CVCVCV...C; infixes/prefixes VCVCVC...VC; and special words (such as the parentheses spra ... spru) that are identified by CCC clusters (ick!). There are few of these last, and it is easy to arrange for them not to interfere with the unique resolvability of words.

Constants never occur [next] to one another, and prefixes/infixes only under special circumstances; thus word boundaries between these are always marked by CC, VV, or a dramatic pause between infix and a following prefix. The CCC clusters indicate the presence of parentheses and other special constructions and cannot create ambiguities as long as there are not very many of them and they are chosen carefully.
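The word-class scheme can be sketched as a classifier (my own illustration, not R’s code; the consonant and vowel inventories and the test words are my assumptions, since R gives neither):

```python
import re

C = "[bcdfgjklmnprstvz]"   # assumed consonant inventory
V = "[aeiou]"              # assumed vowel inventory

CONSTANT = re.compile(f"{C}({V}{C})+")   # CVC, CVCVC, CVCVCVC, ...
AFFIX    = re.compile(f"({V}{C})+")      # VC, VCVC, VCVCVC, ...
TRIPLE_C = re.compile(f"{C}{C}{C}")      # special words carry a CCC cluster

def classify(word):
    """Assign a word to one of R's three classes by shape alone."""
    if TRIPLE_C.search(word):
        return "special"
    if CONSTANT.fullmatch(word):
        return "constant"
    if AFFIX.fullmatch(word):
        return "affix"
    return "unknown"

assert classify("bacad") == "constant"   # C V C V C
assert classify("abac") == "affix"       # V C V C
assert classify("spra") == "special"     # contains the CCC cluster "spr"
```

Because constants start and end with consonants while affixes start with vowels and end with consonants, class membership is decidable from the letter shapes alone, which is what makes the CC/VV boundary-marking argument above go through.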

 Grammar: terms of the language are built from constants as atomic building blocks using prefix and infix operators with user-definable precedence and left or right associativity [rules] ([these] can be changed by the user in the middle of a session, with immediate changes in the form of terms [becoming] visible).  This gives a quite general grammatical structure.
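A grammar of exactly this shape is what precedence climbing handles; here is a minimal sketch (my own illustration, not R’s code; the operator table, prefix set, and token forms are invented for the example):

```python
# User-definable table: operator -> (precedence, associativity).
PREC = {"+": (1, "left"), "-": (1, "left"), "^": (2, "right")}
PREFIX = {"neg"}

def parse(tokens):
    """Parse a pre-split token list into a nested-tuple term."""
    tree, rest = parse_expr(tokens, 0)
    assert not rest, "trailing tokens"
    return tree

def parse_expr(tokens, min_prec):
    left, tokens = parse_atom(tokens)
    while tokens and tokens[0] in PREC:
        op = tokens[0]
        prec, assoc = PREC[op]
        if prec < min_prec:
            break
        # Left-associative ops exclude themselves from their right subtree.
        next_min = prec + 1 if assoc == "left" else prec
        right, tokens = parse_expr(tokens[1:], next_min)
        left = (op, left, right)
    return left, tokens

def parse_atom(tokens):
    if tokens[0] in PREFIX:
        arg, rest = parse_atom(tokens[1:])
        return (tokens[0], arg), rest
    return tokens[0], tokens[1:]

# "a + b + c" groups to the left; "a ^ b ^ c" groups to the right:
assert parse(["a", "+", "b", "+", "c"]) == ("+", ("+", "a", "b"), "c")
assert parse(["a", "^", "b", "^", "c"]) == ("^", "a", ("^", "b", "c"))
```

Changing an entry in PREC mid-session immediately changes how subsequent terms parse, which matches the user-modifiable behavior described above.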

 An interesting point about the grammar is that it is almost completely decoupled from the semantics, which is not the case in Loglan. The “language” which I implemented has a complete phonetics and grammar without yet having any semantics at all; any number of “logical languages” could be implemented within this framework (not that I necessarily recommend it for this purpose!). This decoupling of grammar and semantics is something which I recommend to future linguistic engineers.

 New constants and infixes/prefixes may be declared freely by the user; “constants” beginning with “v” are variables [and] need not be declared, and are always implicitly universally quantified. As noted, order of operations and left/right associativity can be freely modified by the user.

 Automated reasoning support: Equational reasoning (as in algebra) with the ability to write proof tactics (programs expressing proof strategies) in an internal programming language (whose constructs are also “speakable”, for what that is worth).  In the version of the prover which uses more familiar mathematical notation, I wrote a complete decision procedure for tautologies, for example.

 Anaphora and variable-binding are not now supported, but will both come to be supported when certain projected upgrades are made in the prover.

 Having done this, I see some interest in the project of seeing how to take a language with a more general kind of grammar, such as Loglan, and interface it with the theorem prover.  The general idea would be to use an existing parser for the target language to generate a parse tree for an utterance.  There is a straightforward way to convert a parse tree into a term in the kind of language which the theorem prover is designed to handle, and converting the display functions to display sentences in a more general kind of language is easy.
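The “straightforward way” to flatten a parse tree into a prover term might look like this (my own sketch; the nested-tuple tree shape and the printed term syntax are assumptions, not R’s format):

```python
def to_term(tree):
    """Convert a parse-tree node (op, child1, ..., childn) into the
    applicative term op(t1, ..., tn); leaves become constants."""
    if isinstance(tree, tuple):
        op, *kids = tree
        return f"{op}({', '.join(to_term(k) for k in kids)})"
    return tree

assert to_term(("+", ("+", "a", "b"), "c")) == "+(+(a, b), c)"
assert to_term("x") == "x"
```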

 I certainly do not recommend the framework which I described above as any kind of alternative to Loglan; it is simply a small test-bed.

 Difficulties with implementing Loglan itself:

 1.  The phonetics of Loglan are considerably more complicated.  For this, I need an algorithm for resolving and labelling a stream of letters. [The Institute’s current Resolver Project should soon provide that algorithm.] My intention, if I proceed on my own, is to work with a manageable subset: probably the legal words of 1975 Loglan minus names.

 2.  The grammar of Loglan is large. The theorem prover’s user-definable parsing is restricted to infix and prefix operations with definable order of operations and left/right associativity. More of Loglan’s grammar may be reducible to this than one might think, but not all of it; a certain amount of preprocessing would need to be done to get Loglan utterances into a shape that the theorem prover would know how to handle. The approach which I would probably take is to produce a language of the type directly readable by the prover as similar to Loglan as possible, then write a preprocessor to convert between (a subset of) Loglan and this intermediate language.  An example of a likely difference between (subset) Loglan and the intermediate language: there would need to be some kind of separator between arguments in termsets in the intermediate language, which a preprocessor would have to be able to supply.

 I would be able to do a lot of testing for (2) without addressing (1), by using the phonetics of the “language” already implemented and giving L words different shapes.  This is fortunate, because I’m much more interested in (2) than in (1) myself!

                                                 —Randall Holmes