\documentstyle{article}
\title{Mark2 Prover Documentation}
\author{M. Randall Holmes}
\begin{document}
\maketitle
\tableofcontents
\section{Copyright Notice}
Copyright 1995, M. Randall Holmes.
\section{Introduction; the Basic Session Model}
Mark2 is an interactive equational theorem prover. The basic model
for a session under this prover is that the user enters a term in the
internal language of the prover, further constrained by declarations of
constants and operations in the theory in which the user is working.
This term can be thought of as the left side of an equation which the
user is setting out to prove. The user will then modify this term
using equational theorems already proved or given as axioms as rewrite
rules until he converts the equation to the desired form. The user
views a selected subterm of the current version of the right side of
the equation; he may issue ``movement'' commands which change which
subterm is selected, or he may apply theorems as rewrite rules to the
selected subterm. All uses of equations as rewrite rules apply only
to the selected subterm, not to its proper subterms. When the equation
has assumed the form the user wants, it is ``proved'' as a theorem and
becomes available for use as a rewrite rule.
There are some additional factors which complicate this basic model,
but the broad outline has remained fixed through the development of
the prover. One thing which should be clear is that this is not a
model for an automated prover. It is also clear that this procedure
would be very laborious without access to automation of reasoning
steps.
The ability to automate reasoning steps is provided by a tactic
language. A distinctive feature of the tactic language is that
tactics or ``programs'' in this language are represented as equations
in the internal language, i.e., as theorems. The way in which this is
achieved will be discussed below. This differs from the situation in
provers in the tradition of Edinburgh LCF, in which theorems and
tactics are objects at quite different levels written in different
languages.
The logic immediately apparent to the user in this prover is simple
equational or ``algebraic'' logic. There is a higher-order logic
implicit in the definition and variable-binding procedures of the
prover, which can be thought of as a polymorphic version of the simple
type theory of Russell, in which no explicit type indices are ever
needed. It is actually a version of Quine's New Foundations.
For precise syntax of commands mentioned in the text, look in the
Command Reference section. Also look at the beginning of the Command
Reference section for information on syntax restrictions imposed by ML
(arguments must be quoted and unit arguments are needed for operations
which don't have real arguments). The Command Reference represents an
older layer of documentation than the rest of the text, so may not
include descriptions of some commands referenced in the text. Later
versions will be updated.
An old document with a tutorial and an even older command reference
section of its own is appended; the earlier one is more reliable.
Both have been updated to some extent in the course of the preparation
of this document; they are organized along somewhat different lines.
\section{The Internal Language}
\subsection{General Syntax}
\subsubsection{Atomic Terms} These fall into four categories:
\begin{description}
\item[numerals:] {\tt 0} and non-zero-initial strings of digits.
Subsumed under constants for most purposes in this document; the
main difference is that numerals are predeclared.
\item[constants:] Strings consisting of characters from the set
consisting of the letters (upper and lower case are allowed and
distinct), digits, and the characters {\tt\_} and {\tt ?}. A
numeral is not a constant, and a constant may not begin with {\tt
?}. Numerals are subsumed under constants in most respects,
except that they are predeclared.
\item[free variables:] As constants, except that they begin with {\tt
?} and should contain in addition some non-digit.
\item[bound variables:] A nonzero numeral prefixed with a {\tt ?} is a
bound variable.
\end{description}
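For illustration (these tokens are not drawn from any particular
theory), here are examples of each of the four categories:
\begin{verbatim}
0   523           (numerals)
x   Union   R2    (constants, once declared)
?x  ?sum1         (free variables)
?1  ?2            (bound variables)
\end{verbatim}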
\subsubsection{Operators}
Operators may be either unary prefix or binary infix. A unary prefix
operator is understood in the current version as representing the same
operation as the infix with the same shape, but with a default left
argument which depends on the infix. Operators fall into three
classes:
\begin{description}
\item[constant:] Strings of special characters other than {\tt ?},
{\tt\_} or paired forms like brackets, braces, and parentheses,
additionally not beginning with {\tt \^{ }} or {\tt :}. A single
{\tt \^{ }} or {\tt :} is also a constant operator.
\item[free variable:] As above, but with initial {\tt \^{ }} and at
least one other character.
\item[type-raised:] Initial {\tt :} (when followed by at least one
other character) represents a unary operation on operators (variable
or constant) which can be iterated. Details below.
\end{description}
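For illustration, here are possible operator shapes of each class (the
constant operators shown would still need to be declared before use):
\begin{verbatim}
+   **   =        (constant operators)
^+  ^**           (free variable operators)
:+  ::+           (type-raised operators; ::+ iterates the operation)
\end{verbatim}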
\subsubsection{Compound Terms}
\begin{description}
\item[terms constructed with operators:] Terms are constructed with
unary prefix or binary infix operators in the obvious way. Spaces
are not needed between atomic terms and operators; the only
situation where spaces are needed inside a term is when an infix is
followed by a prefix (and this can be avoided by using parentheses).
The default order of operations is that all operators have the same
precedence and associate to the right as far as possible. The user
can set the precedence of an operator (and reset it at any time).
Parentheses {\tt ( )} may be freely used in user-entered terms;
unneeded parentheses are not displayed by the prover. The
precedence of an operator is a non-negative integer; higher integers
represent operators which bind more tightly, and the choice of left
or right association for binary operators is determined by the
parity of the precedence (even associates to the right, odd to the
left).
As noted above, a prefix operator is always regarded as an
abbreviation for a binary operator with the same shape with a
default left argument depending on the operator. Default left
arguments for operators can be set individually by the user.
\item[variable binding contexts:] The operation of enclosing a term
in brackets {\tt[ ]} produces a variable-binding context (a
lambda-term). The variable bound in a context is determined by the
depth of brackets: {\tt ?1} is bound in the outermost context, {\tt
?2} in the next to outermost, and so forth. Details of change of
variables when substitution occurs are automated. There are
restrictions on the ``relative types'' with which bound variables
can occur in variable binding contexts which will be explained
below; terms which do not meet these criteria will be parsed but
rejected on the application of further tests.
The simplest use of the variable binding context is as a constant
function constructor, when the bound variable is not present.
Limited higher-order matching is now available. At level 1 ({\tt ?1}
free) the term {\tt ?f @ ?1} matches any level 1 term: ``?f'' will
match the result of binding this term with brackets. Similarly,
substitution of a function defined by a variable binding context for
{\tt ?f} in this term will actually cause the ``unbinding'' of the
function. Terms of the form {\tt ?f@?2} have similar privileges at
level 2 ({\tt ?2} free), and similarly for each level. This allows
the direct proof of a theorem equivalent to the built-in theorem {\tt
VALUE}. When new variables are introduced by theorems, they are
introduced as functions of the appropriate bound variable for the
level at which they are introduced (this prevents the application of
theorems which introduce new variables in cases where a stratification
error would be introduced; this is an unintended side-effect, but it
is fairly easy to work around by replacing theorems which introduce
new variables with theorems to which the values of the new variables
are supplied as parameters).
\end{description}
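The following examples may clarify the conventions above (assuming
{\tt +} and {\tt *} are declared infixes with the default precedence):
\begin{verbatim}
?a + ?b * ?c      is read as      ?a + (?b * ?c)
[?1 + ?x]         the function sending ?1 to ?1 + ?x
[0]               a constant function with value 0
\end{verbatim}
\noindent At the default precedence all operators associate to the
right, so the first term needs no parentheses, while {\tt (?a + ?b) *
?c} would be displayed with its parentheses.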
\subsection{Navigation within Terms}
The basic concept here is that each non-atomic term has a left and
right subterm. The left and right subterms of a variable binding
context are the same largest proper subterm. The left subterm of a
term with a top-level prefix operator is the default left argument
associated with that operator (it will be displayed in both local and
global displays if it becomes the selected subterm).
The user sees the whole right term of the current version of the
equation being proved, and sees the selected subterm as well. In the
display of the entire term, the selected subterm is enclosed in braces
{\tt \{ \}}. Braces are not currently used in any other way by the
prover.
The basic movement commands are {\tt up}, which changes the selected
subterm to the smallest subterm having the current selected subterm as
a proper subterm, {\tt left} and {\tt right}, which change the
selected subterm to its left or right subterm, respectively, and {\tt
top}, which sets the selected subterm to the whole term.
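A hypothetical fragment of a session might look as follows (assuming
constants {\tt a}, {\tt b}, {\tt c} and infixes {\tt +} and {\tt *}
are declared; braces mark the selected subterm):
\begin{verbatim}
{a + b * c}       after top    (the whole term a + (b * c))
a + {b * c}       after right
a + {b} * c       after left (from the previous selection)
a + {b * c}       after up
\end{verbatim}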
More sophisticated movement commands, {\tt upto}, {\tt downtoleft},
and {\tt downtoright}, move to remote subterms matching a term
argument. New commands listed in the first command reference section
below allow one to search for instances of left and right sides of
theorems.
The simplest rewriting commands, {\tt applythm} and {\tt
applyconvthm}, allow one to apply an equation whose name is given as
an argument to the selected subterm. The choice of command determines
whether the left or right side of the equation is matched to the
selected subterm. These commands do not invoke the tactic
interpreter; no automatic rewriting takes place. New commands {\tt
applyenv} and {\tt applyconvenv} allow similar use of saved
environments.
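For instance, with a hypothetical theorem {\tt COMM} stating {\tt ?x +
?y = ?y + ?x} and selected subterm {\tt a + b}, one might enter
(recall from the Command Reference that ML requires quoted arguments):
\begin{verbatim}
applythm "COMM";      (* rewrites the selection a + b to b + a *)
applyconvthm "COMM";  (* matches the right side; rewrites b + a to a + b *)
\end{verbatim}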
\subsection{Declarations}
Certain symbols need to be declared. If undeclared identifiers
appear, the term entered will be parsed, but an error message will be
displayed. Declarations occur automatically when some operations
(such as proving theorems or making definitions) are executed.
\begin{description}
\item[constants:] Atomic constants, other than numerals, need to be
declared. Free variables and bound variables do not need to be
declared. The {\tt declareconstant} command is used.
\item[constant operators:] Non-variable operators need to be declared. An
operator can be declared as unary, in which case it receives a
default left argument of {\tt defaultprefix} automatically (but can
still be used as a binary infix!); an operator can also be assigned
a specific default left argument, which must always be an atomic
constant or numeral. An operator always has two integers associated
with it, its ``left type'' and ``right type'', whose role in
defining the notion of relative type will be discussed below.
Operators can be declared with given left and right types or with the
default values of 0 for both. The {\tt declareinfix} or {\tt
declaretypedinfix} command is used.
\item[free variable operators:] Free variable operators do not need
to be declared unless it is desired that a left and right type be
associated with them, in which case a declaration is necessary. The
same commands are used as for constant operators. An operator
variable with declared types will match only operators with the same
types; an operator variable without declared types will match any
operator. The {\tt declareopaque} command can be used to block an
untyped operator variable from being declared with a type later.
There are various kinds of special declaration which will be
discussed elsewhere.
\end{description}
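A hypothetical sequence of declarations might look like this (the
exact argument conventions for each command are given in the Command
Reference):
\begin{verbatim}
declareconstant "Union";   (* an atomic constant *)
declareinfix "**";         (* an operator with left and right types 0 *)
\end{verbatim}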
\subsubsection{Relative Type and Stratification}
In any term, each subterm can be assigned a {\em relative type\/} in
accordance with the following procedure: the relative type of the
entire term is 0; the relative type of the left/right subterm of a
variable binding context is one less than the relative type of the
variable-binding context; the relative type of the left (right)
subterm of an operator term is obtained by adding the left (right)
type of the operator to the relative type of the operator term.
A variable binding context entered by the user must have the property
that the relative type of each occurrence of the variable bound in it
is the same as the relative type of its left/right subterm. Terms
violating this condition can be created during proofs but cannot be
stored in theorems. Terms satisfying this condition for all subterms
which are variable binding contexts are said to be {\em weakly
stratified\/}.
A term is said to be {\em stratified\/} if it is weakly stratified
and, in addition, each free variable appears with no more than one
relative type. This notion is used in the definition facility.
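A small worked example (using the application infix {\tt @}, whose
left type is 1 and right type is 0, as noted below):
\begin{verbatim}
[?x @ ?1]    weakly stratified: the body ?x @ ?1 has relative type -1,
             and the occurrence of ?1 also receives -1 + 0 = -1.
[?1 @ ?1]    not weakly stratified: the left occurrence of ?1 receives
             relative type -1 + 1 = 0, which differs from -1.
\end{verbatim}
\noindent The second term is the familiar self-application of the
lambda-calculus, which stratification excludes.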
These notions are best understood if one imagines (as is not the case)
that each term has an integer type associated with it. Type $n$ will
be identified with the Cartesian product of type $n$ and type $n$
(relative types of the infix {\tt ,} of ordered pairing are both 0)
while type $n+1$ is the type of functions from type $n$ to type $n$.
This explains the typing of variable binding contexts (lambda-terms)
and is also illustrated by the fact that the left type of the infix
{\tt @} of function application is 1 and its right type is 0. The
type theory being implemented can be thought of as a polymorphic
version of simple type theory with a linear hierarchy of function
types rather than set types; in fact, the types are all identified as
in Quine's New Foundations. Any difficulty with the open problem of
the consistency of New Foundations is avoided by not requiring that
function application be extensional; New Foundations with urelements
is known to be consistent.
\subsubsection{Predicativity and Opacity}
Certain special conditions complicate the definitions of weak
stratification and stratification. Theories started with the {\tt
clearpredicative} command place the restriction on stratified or
weakly stratified terms that no variables of appropriate types occur
within any variable binding context or in the left (right) subterm of
an operator term if the left (right) type of the operator happens to
be negative. The {\tt impredicative} command lifts this restriction
for a theory. This used to be the default state of the prover, so
theory files produced with old versions of the prover may be in this
state.
The {\tt declareopaque} command can be used to place additional
restrictions on stratification notions. Operators may be declared
``opaque''; a term built with an opaque operator may not contain
any variables of the kind under consideration if it is to be
stratified or to be a subterm of a stratified term. The {\tt
declareopaque} command has an
additional application: it can be used to block the later declaration
of an untyped operator variable as typed. Operator variables which
are not declared with types are treated as opaque in any case, since
relative types cannot be assigned to subterms of terms built with such
operators. The {\tt declareopaque} command can only be used to declare
a new operator; existing operators cannot be made opaque.
In the case of weak stratification, the restrictions indicated in case
of predicativity or opacity apply only to bound variables; in case of
full stratification, they apply to all variables.
\subsubsection{Refinements: Embedded Theorems and Type Retractions}
The stratification function has recently been refined in two ways.
Where embedded theorem names appear, they are checked for
stratification internally but their type relative to the term to which
they are attached is not checked. This is safe, since embedded
theorems have no effect on term value. In addition, a predeclared
infix {\tt :} is provided which serves to generate type labels: the
term {\tt t : x} is the result of applying a retraction to {\tt x},
determined by {\tt t}, whose range is a ``strongly Cantorian set'' (a
technical notion in set theories related to New Foundations). This
has the effect of allowing the term to be freely raised and lowered in
type. The effect of this on stratification is that one may freely
raise and lower the types assigned to {\tt x} and its subterms for
purposes of stratifying the term in which it occurs; the type label
should contain no type information (any variables in {\tt t} must be
restricted to strongly Cantorian sets). The only way in which types
in {\tt x} may influence stratification is that the difference between
the types of two variables occurring in {\tt x} may be fixed, and this
will have to agree with the stratification of the context. This is
motivated by the theoretical claim which we have made (in our paper in
{\em Modern Logic\/}, vol. 4, no. 1) that ``strongly Cantorian set''
corresponds to ``data type'' in a model of computing based on this
kind of set theory.
A further refinement along the same lines is that functions and
operators whose input or output is restricted to a ``type'' (strongly
Cantorian set) (this must be witnessed by a theorem) can be declared
as ``scin'' (for strongly Cantorian input) or ``scout'' (for strongly
Cantorian output). Stratification restrictions are relaxed
accordingly, without any need for explicit type labels. For example,
the equality infix {\tt =} or a universal quantifier function {\tt
forall} can be declared ``scout'', since their output is boolean, and
so the stratification function of the prover can exploit the fact that
types of boolean-valued expressions can be freely raised and lowered
in order to permit more complex abstractions. This feature should
have the effect of minimizing the use of explicit type labels. Note
that any use of operators declared ``scin'' or ``scout'' in abstraction
terms has the effect of introducing dependencies on the witnessing
theorems for those operators.
Examples of permissible types are booleans (or any finite set) and
natural numbers. Cartesian products and function spaces built from
strongly Cantorian sets are strongly Cantorian. Recursive and
polymorphic type constructions will also work. The system of absolute
types which can be built under Mark2 is thus as strong as that usually
used in computer science.
\subsubsection{The Definition Facility}
The commands {\tt defineconstant}, {\tt defineinfix}, {\tt
definetypedinfix}, and {\tt defineopaque\/} can be used to define
(and automatically declare) constants or operators.
The predeclared operators {\tt @} (function application) and {\tt ,}
(ordered pair) have a special role in the definition facility.
A definition introduces an equation whose left side must have a
special form.
We first define the class of {\em argument lists\/}. A free variable
or iterated constant function of a free variable (built using
brackets) is an argument list. Any pair (built using the {\tt ,}
infix) of argument lists is an argument list. All argument lists are
constructed in this way.
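For example (with {\tt ?x}, {\tt ?y} and {\tt ?z} free variables):
\begin{verbatim}
?x                      an argument list
[?x]    [[?x]]          argument lists (constant functions of ?x)
(?x , ?y)               an argument list
((?x , [?y]) , ?z)      an argument list
\end{verbatim}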
If a constant is being defined, the constant itself is a possible left
side for its defining equation. If any term is an admissible left
side, the term built by applying that term to an argument list (using
the {\tt @} infix) is an admissible left side. All admissible left
sides for constant definitions are constructed in this way.
If an operator is being defined, a unary or binary term whose left and
right subterms (the left may, of course, not be present; a default
left argument of {\tt defaultprefix} is automatically declared in this
case) are argument lists is an admissible left side. The result of
applying an admissible left side to an argument list is an admissible
left side. All admissible left sides for operator definitions are
constructed in this way.
Notice that ``curried'' higher order functions can be implicitly
defined!
The further conditions on a definition are that the constant or
operator being defined be undeclared, but that all other constants
appearing in the defining equation be declared, that every free
variable occurring on the right side also occur on the left side (no
operator variables may appear in a definition), and that the defining
equation considered as a term ({\tt =} is predeclared and has left and
right types of 0) be stratified.
Implicit function definitions using this procedure are guaranteed to
avoid paradox (they are valid in the context of New Foundations with
urelements). The ability to apply the constant function operator to
variables allows more control over the relative typing of arguments;
attention to relative type is also important when currying is used (a
curried function is {\em not\/} equivalent to a similar-looking
function with an argument list built using pairing; relative types are
quite different!).
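As an illustration of the role of the constant function operator in
typing, consider defining a hypothetical curried constant function
combinator {\tt k}. The obvious equation {\tt k @ ?x @ ?y = ?x} is
not stratified: the occurrence of {\tt ?x} on the left receives
relative type 1 (it sits one application deeper than the whole term),
while the occurrence on the right receives 0. Wrapping the first
argument in brackets lowers its relative type by one:
\begin{verbatim}
k @ [?x] @ ?y = ?x;
\end{verbatim}
\noindent This equation is stratified and is an admissible definition
of {\tt k}.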
The {\tt defineitconstant}, {\tt defineitinfix} and {\tt
defineittypedinfix} commands allow one to use the selected subterm
of the right side of the current equation as the right side of a
definition.
\section{The Tactic Language}
The tactic language of Mark2 is actually a predeclared part of the
internal language. Tactics are represented as equations in theories,
and this is achieved via a device for representing theorems and
operations on theorems as objects in the internal language of each
theory.
\subsection{Predeclared Operators}
The predeclared operators which implement the tactic language are the
{\em rule infixes\/} {\tt =>}, {\tt <=}, {\tt =>>} and {\tt <<=}.
There are additional related operators {\tt *>}, {\tt <*} and {\tt !!}
which will be explained below. The operators {\tt @} and {\tt ,} of
function application and pairing have special roles in the tactic
language as well. The predeclared operator {\tt =} can be used as a
theorem constructor. There are special unary ``delay'' operators {\tt
\#} and {\tt \#\#} explained below.
There are also predeclared mathematical operators {\tt +!}, {\tt *!},
{\tt -!}, {\tt /!} and {\tt \%!} which implement the operations of
unsigned integer arithmetic; these are discussed further below.
\subsection{Semantics of the Rule Infixes}
The denotation of the term {\tt thm => term} is the same as the
denotation of the term {\tt term}. {\tt thm} is expected to denote a
theorem which will be applied as a rewrite rule to {\tt term} when the
tactic interpreter {\tt execute} or the tactic debugger {\tt steps}
is invoked. If it turns out that the left side of {\tt thm} does not
match the structure of {\tt term}, its ``application'' will have no
effect on the presentation of {\tt term}.
The denotation of {\tt thm <= term} is similar, except that the
theorem {\tt thm} is to be applied in the opposite sense (when {\tt
term} matches the right side of the equation represented by {\tt
thm} rather than the left side).
In both cases, the rule is applied only at the top level of {\tt
term}; it is not applied to proper subterms.
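For instance, if {\tt COMM} is a hypothetical theorem stating {\tt ?x
+ ?y = ?y + ?x}, then the term
\begin{verbatim}
COMM => (a + b)
\end{verbatim}
\noindent denotes simply {\tt a + b}; when the tactic interpreter is
invoked, it is rewritten to {\tt b + a}.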
The prover commands {\tt ruleintro}, {\tt revruleintro} are used to
introduce an application of a theorem (given as the argument of the
command) to the selected subterm of the right side of the current
equation. The prover command {\tt droprule} will eliminate a theorem
application at the top level (this works for all four infixes).
See above for relaxation of stratification rules where embedded
theorem applications are involved.
\subsection{Execution Order of the Tactic Language}
When the {\tt execute} command is issued, all embedded theorems in the
selected subterm of the right side of the current equation are applied
as rewrite rules. The rule is that innermost embedded theorems are
applied first; this includes embedded theorems introduced by the
execution of the tactic interpreter! It means that rewrite rules are
always applied to terms which themselves contain no embedded theorems.
This is only a preliminary picture of the execution order; but it is
needed to explain what is going on below.
The debugger {\tt steps} carries out this process one step at a time,
with some parallelism: it applies all and only innermost embedded
theorems.
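For example, a term of the form
\begin{verbatim}
thm1 => thm2 => term
\end{verbatim}
\noindent groups as {\tt thm1 => (thm2 => term)}; the innermost
embedded theorem {\tt thm2} is applied to {\tt term} first, and {\tt
thm1} is then applied to the result.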
\subsection{Semantics of Alternative Rule Infixes}
The semantics of the {\em alternative rule infixes} {\tt =>>} and {\tt
<<=} reflect a need for finer control of the circumstances under
which rules are applied. These should appear only in contexts like
{\tt thm1 =>> thm2 =>> ... thmn =>> firstthm => term}, where the
directions of any arrows that appear can be reversed. Note that the
last rule infix is {\em not\/} an alternative infix!
Once {\tt term} becomes free of embedded theorems in the course of the
execution of the tactic interpreter, an attempt is made to apply each
of the theorems in the list, working from the inside out, until one
applies, after which the remaining theorems linked with alternative
infixes are ignored. Lists of alternative theorems linked with the
standard rule infixes have the undesirable feature that more than one
of the theorems might apply in an unexpected way.
The precise way in which this effect is achieved is that a theorem
embedded via an alternative infix is executed only if the term to
which it is applied is the result of an immediately preceding rewrite
rule application at the top level which failed. Once an application
succeeds, alternative rule applications are ignored (this will be
iterated). This is the reason why the innermost infix must be
standard; there is no preceding rule application to consider.
Theorems embedded with alternative infixes can be introduced using
{\tt altruleintro}, {\tt altrevruleintro}. {\tt altrule} can be used
to toggle between alternative and standard infixes.
There is a theoretical inelegance about the alternative rule infixes;
they probably ought to have the same structure as the ``on-success''
infixes discussed in the next subsection (i.e., the construction {\tt
?thm1 =>> ?thm2 => term} ought to be {\tt (?thm1 =>> ?thm2) => term});
the behaviour of the alternative infixes makes the effects of the
tactic interpreters non-local in a way which would be avoided by such
a change. However, non-locality is now pervasive, given variable
binding and local hypotheses, so we are not planning to change this
situation. It is interesting to observe that if the change described
above were made and the precedence of the alternative infixes were set
to 1, there would be no effect on the appearance of terms.
\subsection{Semantics of Other Special Operators}
The operators {\tt *>} and {\tt <*} are binary operators on theorems;
the result of the application of {\tt thm1 *> thm2}, for example, is
that {\tt thm2} is applied (in the usual direction) and then {\tt
thm1} is applied if the application of {\tt thm2} succeeded.
The unary operator {\tt !!} functions as a kind of ``inlining''
operator. The success or failure of application reported by {\tt
!!thm => term} is either failure if {\tt thm} does not apply (this
operator should normally be used with theorems which do not fail) or
the success or failure of the last embedded theorem application at the
top level of the term which results from the application of {\tt thm}.
The typical use of this is in case {\tt thm} is of the form
\begin{verbatim}
?x = (thm1 => thm2 => ... thmn => ?x);
\end{verbatim}
such a theorem is always applicable (and clearly true) and has the
effect of causing a sequence of theorems to be applied;
success/failure of {\tt !!thm} would be success/failure of the
application of {\tt thm1} in this example.
The on-success commands can be used for building guarded lists of
commands to be executed; the inlining operator is provided to make
certain kinds of defined operations on theorems behave sensibly. They
do not interact with each other too well: theorems of the form {\tt
(!! thm2) *> thm1} or {\tt thm2 *> (!! thm1)} do not work in any
useful way, but a theorem of the form {\tt !!(thm2 *> thm1)} will work
as expected. The problem is that the notion of success/failure is
determined in different ways by the infixes {\tt *>} and {\tt <*} on
the one hand and the inlining operator on the other. Using the
criterion used by the guard infixes, the application of an inlined
theorem {\em always} succeeds, though this may not be what the inlined
theorem reports to a following theorem.
An equation {\tt term1 = term2} can be applied as an embedded theorem;
the tactic interpreter will search for a theorem which justifies the
equation, and apply such a theorem if it finds it. This is useful if
one has forgotten the name of a theorem whose form one knows; more
powerful possibilities arise from this, since it allows the tactic
interpreter to construct possible theorems ``dynamically''.
\subsection{Theorems with Parameters}
Theorems may have parameters; this will mean that the theorem name
will be applied using {\tt @} to one or more lists of arguments (a
``list of arguments'' is a substitution instance of an ``argument
list'' as formally defined above) when it appears embedded. These
arguments may represent objects of the theory or other theorems or
tactics. It is possible to prove theorems whose names are operators,
in which case an instance of the theorem will have a right argument
(and possibly a left argument) and may further be applied to one or
more lists of additional arguments.
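For instance, a hypothetical parameterized theorem {\tt SUBST} taking
a pair of parameters might be embedded as
\begin{verbatim}
SUBST @ (a , b) => term
\end{verbatim}
\noindent where {\tt (a , b)} is a list of arguments in the sense
defined above; the terms {\tt a} and {\tt b} can supply information
not present in {\tt term}.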
Parameterized theorems allow the introduction of information which is
not in the target term. They demonstrate one of the two senses in
which Mark2 supports a higher-order programming language (the other is
explained in the following section).
\subsection{Functional Programming in the Tactic Language}
It is possible to use the {\tt proveprogram} command to bind a theorem
to a constant (representing a function) or operator in the object
language. The theorem must have a form similar to that which a
definition of the constant or operator would have if it had one,
except that variables on the left side can be specialized and new
variables might be introduced. The theorem will then be automatically
applied by the tactic interpreter wherever the constant or operator
appears. The effect is to implement higher order functional
programming in the tactic language.
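A minimal hypothetical example: suppose the theorem
\begin{verbatim}
twice @ ?x = ?x +! ?x;
\end{verbatim}
\noindent is bound to the constant {\tt twice} using {\tt
proveprogram}. The tactic interpreter will then rewrite an occurrence
of {\tt twice @ 3} to {\tt 3 +! 3}, and the arithmetic rules described
below reduce this in turn to {\tt 6}.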
The unary delay operator {\tt \#} can be used to suppress application
of the theorem bound to the appropriate constant or operator in a term
(if {\tt f} were a function with a bound theorem, {\tt \#(f @ ?x)}
would be a term in which ``execution'' of {\tt f} (application of the
associated theorem) was suppressed; if {\tt **} were an operator to which
a theorem was bound, {\tt \#(?x ** ?y)} would be a term in which its
execution was suppressed). The delay operator is ignored by the
matching function, and if a theorem is successfully applied to the
``delayed'' term, the delay operator is eliminated.
The unary operator {\tt \#\#} implements full laziness; the term to
which it is applied does not have embedded theorems applied at all
unless and until a theorem is applied to the lazy term.
Use of the delay operator proves necessary when the bound theorem has
several steps or cases, and so takes the form of the introduction of a
number of other theorems to be applied in turn or as alternatives to
the original term.
The execution order of conditional expressions of the form {\tt x || T
, U} is that the subterm {\tt x} has all theorem applications carried
out before {\tt T} or {\tt U} is touched; this is a context in which
``laziness'' can be implemented without any use of the delay prefixes.
\subsection{The Special Theorems {\tt RAISE} and {\tt VALUE}}
The built-in theorem {\tt RAISE} implements the semantics of the {\tt :}
unary operation on operators. In the case of ``flat'' operators
(left and right type 0), the effect is to implement all equations like
\begin{verbatim}
(?x@?z) + (?y@?z) = (?x :+ ?y) @ ?z;
\end{verbatim}
\noindent in other words, the effect of the colon operation is to
produce a parallel operator on functions. The effect on operators
with nontrivial relative type is the same in intent, but it is
necessary to adjust types by appropriate introduction of constant
function operators. The best way to find out how this works is by
experiment. The {\tt RAISE} theorem can be applied in the converse
direction as well. The {\tt RAISE} theorem only works on flat
operators (without nontrivial left or right types) in a predicative
theory.
The built-in theorem {\tt VALUE} takes a parameter, and is intended to
be applied to variable-binding contexts: the effect of applying {\tt
VALUE @ [thm]} to {\tt[term]} is the same as the effect of executing
{\tt[thm => term]}. The reasons for the appearance of brackets around
the parameter are technical. Limitations of matching into variable
binding contexts originally forced this tactic to be provided as a
primitive. These limitations no longer obtain; an equivalent theorem
can be proved using limited higher-order matching.
\subsection{The Special Theorems {\tt EVAL} and {\tt BIND}}
These theorems implement abstraction and reduction for variable
binding contexts: {\tt EVAL} converts a term {\tt [?u] @ ?t} by
replacing the appropriate bound variable with {\tt t} in {\tt u} and
``demoting'' other bound variables as necessary. {\tt BIND @ ?t}
(note the parameter) applied to {\tt u} converts it to the form {\tt
[?U] @ ?t} if the result {\tt [?U]} of replacing each occurrence of
{\tt t} with the appropriate bound variable and ``promoting'' other
bound variables as necessary is weakly stratified.
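Schematically (using a named bound variable for readability; Mark2's
actual display of bound variables uses de Bruijn levels, so this is not
literal notation), {\tt EVAL} and {\tt BIND} are converse reductions:
\begin{verbatim}
EVAL:       [x + x] @ 3   -->   3 + 3
BIND @ 3:   3 + 3         -->   [x + x] @ 3
\end{verbatim}
\noindent the {\tt BIND} conversion succeeding only if the abstracted
term is weakly stratified.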
\subsection{Arithmetic Operators}
Operators of unsigned integer arithmetic listed above are applied
whenever both arguments are numerals. The debugger now carries out
arithmetic operations step by step; in earlier versions, it evaluated
arithmetic expressions ``all at once'', a situation which would be
easy to restore.
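Informally (the arrows are not Mark2 notation), an arithmetic infix is
applied only when both of its arguments are numerals:
\begin{verbatim}
2 + 3     -->   5
?x + 3    -->   ?x + 3     (no reduction: ?x is not a numeral)
\end{verbatim}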
\subsection{The Special Theorems {\tt INPUT} and {\tt OUTPUT}}
The theorem {\tt OUTPUT} causes the current subterm to be displayed.
It is always regarded as succeeding. A keystroke is necessary to
restart the tactic interpreter after output is read. The sense of the
rule infix used with output (normal or reverse) has no effect on its
function.
The theorem {\tt INPUT} causes the list of current hypotheses to be
displayed (see next section) along with the current subterm (this is
all the information relevant to the effect of the input). The user is
then prompted to input a theorem to be applied to the current subterm.
It is applied using the rule infix used with {\tt INPUT}.
\subsection{The Special Theorem {\tt ORDERED}}
The built-in theorem {\tt ORDERED} has no effect on a term to which it
is applied. When it is presented with an infix term, it reports
success when the two top-level subterms are in order (an order is
defined on terms, which is lexicographic on atoms with constants
preceding variables and is determined by the rightmost element of a
complex term) and failure otherwise. When presented with other kinds
of terms, it reports success as a default. The theorem can be applied
in conjunction with commutative laws for operations in the building of
sorting algorithms.
\subsection{The Hypothesis Facility}
The hypothesis facility assigns a special role to expressions of the
form {\tt ?x || ?y , ?z}, which are intended to denote {\tt ?y} if
{\tt ?x} denotes {\tt true} and {\tt ?z} otherwise. In other words,
these are expressions defined by cases.
Expressions with infix {\tt ||} must have a pair as their second
argument, or they will be rejected by the declaration checking functions of the
prover (the user can create terms which violate this restriction, but
they cannot appear in theorems). {\tt ||} now has some built-in
execution behaviour: if {\tt ?x} evaluates to {\tt true} or {\tt
false} under a tactic interpreter, the case expression will evaluate
to the appropriate one of {\tt ?y} and {\tt ?z}. The subterm {\tt ?x}
is handled by the tactic interpreter before the subterms {\tt ?y} or
{\tt ?z}; conditional expressions can exhibit non-strict execution
behavior.
If the subterm matching {\tt ?x} is actually an equation {\tt ?a =
?b}, then this equation is usable as a theorem in the context
matching {\tt ?y}. Moreover, any case expression with the same
equation in the position matching {\tt ?x} appearing in either the
context matching {\tt ?y} or the context matching {\tt ?z} can be
reduced to one of the alternatives (the left alternative inside {\tt
?y} and the right alternative inside {\tt ?z}).
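A schematic sketch of this reduction (informal arrow notation): a
nested case expression with the same equation as its condition
collapses to the left alternative inside {\tt ?y}:
\begin{verbatim}
?a = ?b || (?a = ?b || ?u , ?v) , ?z
    -->   ?a = ?b || ?u , ?z
\end{verbatim}
\noindent a copy of the same case expression appearing inside {\tt ?z}
would reduce instead to {\tt ?v}.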
A theorem of the form {\tt 0 |-| n}, where $n$ is a positive integer,
represents the $n$th hypothesis from the top level of the current
term, in the form of an equation applicable in the context matched by
{\tt ?y} in our generic case expression. A theorem of the form {\tt 1
|-| n} represents the same hypothesis in a form usable to decide case
expressions with the same hypothesis. A theorem of the form {\tt (2
|-| n)@parameter} has the same effect as {\tt 1 |-| n}, except that
when applied in reverse (the only way it is likely to be applied) the
parameter is used as the value in the newly introduced case. Mark2
controls what hypotheses are available in what contexts. The {\tt
lookhyp} and {\tt lookhyps} commands can be used to look at
hypotheses. Theorems with infix {\tt |-|} embedded in other theorems
will have their numerals adjusted appropriately on application, so
that they reference the intended hypotheses.
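As an informal illustration (arrow notation and the choice of
hypothesis number are schematic), in a term of the form {\tt ?a = ?b
|| ?y , ?z} the theorem {\tt 0 |-| 1} acts like the equation {\tt ?a =
?b} when applied inside the context matching {\tt ?y}:
\begin{verbatim}
?a = ?b || (?a + ?a) , ?z
    -->   ?a = ?b || (?b + ?a) , ?z
\end{verbatim}
\noindent here {\tt 0 |-| 1} has been applied to the first occurrence
of {\tt ?a}, rewriting it to {\tt ?b}.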
\section{The Proof Environment}
The first two subsections give a quick summary of proof and
application of theorems which is useful in reading the previous
section on the tactic language.
\subsection{Proving Theorems}
When the current equation is in the desired form, the command {\tt
prove} is issued. The argument of this command will either be a
name for the theorem, in which case the theorem is simply recorded
with that name, or, in the case of a parameterized theorem, it may be
a ``format'' for the theorem, which can take exactly the same forms as
the left side of a definition. In either event, the name of the
theorem (which may in the latter case be an operator) cannot have been
declared already and will automatically be declared as a constant or
operator. Variants on the {\tt prove} command are discussed in a
later section.
Note that the definition facility provides another way to prove
theorems of a special form. The {\tt axiom} and {\tt interaxiom}
commands allow one to introduce axioms.
The {\tt forget} command eliminates a theorem and all theorems which
refer to it or depend on it. It is extremely slow, and it is not
especially reliable.
\subsubsection{Declaring Pretheorems}
It is necessary when ``proving'' recursive or mutually recursive
tactics to introduce embedded instances of theorems which have not yet
been proved. The rule infix introduction commands do check for
existence of theorems, so it is necessary to use the {\tt
declarepretheorem} command to signal the future intention of proving
a theorem with the recursively introduced name.
\subsection{Applying Theorems as Rewrite Rules}
The appropriate side of the theorem is matched to the target subterm
and the format of the theorem is matched to the format of the embedded
instance; these matches have to be consistent with one another. If
everything works, the target term is replaced with the appropriate
instance of the other side of the theorem. New variables may be
introduced by this process; such variables have automatically
generated numerical suffixes to reduce the chance of unintended
identifications of new variables with old ones or other new ones. The
command {\tt initializecounter} reinitializes the counter which
provides the automatically generated numerals.
To enforce some degree of confluence, terms containing rule infixes or
delay operators do not match anything for purposes of theorem
application.
It should also be noted that the tactic interpreters are not applied
to terms representing embedded theorems.
\subsection{Features of the Proof Environment}
The proof environment is a data structure containing the following
elements:
\begin{description}
\item[name:] The environment has a name. This may take the form of a
term serving as the default format for the theorem to be proved
(name and parameters). It is not necessary for the user to set the
name. It is automatically set by certain commands, and it can be
viewed using the {\tt envname} command.
\item[left side of theorem:] This is often but not always the term
originally entered by the user. Assignment commands affect both
sides of the current equation, and it is possible to switch the two
sides using the {\tt workback} command.
\item[right side of theorem:] This is the term being viewed by the
user.
\item[position of selected subterm:] A list of booleans is used to
indicate the selected subterm; an integer representing the number of
nested variable binding contexts in which it is found is also
provided.
\item[current dependencies:] Axioms and definitions on which the
proof so far depends are listed. Such information is maintained as
an element of theorems and tactics as well.
\end{description}
The data structure representing a stored theorem has exactly the
same structure, except that it does not include a ``position of
selected subterm'', and it includes an additional component
supporting the module system described below. The system takes
advantage of this; saved proof environments are kept on the theorem
list, though they cannot be used as theorems.
\subsection{Starting Out}
The basic command for starting out is the {\tt start} command, which
takes the initial version of the left side of the equation as a
parameter. A variant is the {\tt startfor} command, which has two
parameters, the name or format of the theorem to be proved and the
initial term.
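A hypothetical opening of a session (the concrete command syntax here
is an assumption based on the parameter descriptions, not a verbatim
transcript):
\begin{verbatim}
start ?x + ?y
startfor pluscomm ?x + ?y
\end{verbatim}
\noindent the first simply sets both sides of the new equation to {\tt
?x + ?y}; the second also records {\tt pluscomm} (an invented name) as
the name of the theorem to be proved.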
The {\tt getleftside} and {\tt getrightside} commands allow one to get
one side of a theorem as the initial term in a new proof session, in
which the name/format environment variable will automatically be set
to the theorem name or format. The {\tt autoedit} command creates a
proof session identical to the one that obtained when the theorem was
proved, and sets the name/format to the name or format of the
theorem.
The {\tt getenv} command allows one to get a proof environment saved
by name; {\tt saveenv} or {\tt backupenv} (the former takes a
name/format as a parameter; the latter uses the current name/format or
a default) allow one to save proof environments on the desktop.
\subsection{Simple Permutations}
The {\tt workback} command interchanges the left and right sides of
the equation. The {\tt startover} and {\tt starthere} commands start
one fresh with the left or right (resp.) side of the former equation
as both sides of the new equation (wiping dependencies).
\subsection{Term Display}
Many commands automatically display the current term and the selected
subterm. One can use the {\tt look} command to do this at any time,
or the {\tt lookhere} command to see just the selected term. The {\tt
lookback} command allows one to view the left side of the equation.
The display of terms uses a scheme of indentation meant to suggest
the precedence structure of the term. This can be fine-tuned in
various ways. The {\tt setline} command will set line length. The
{\tt setdepth} command will set a limit on how deep into the parse
tree subterms will be displayed; the {\tt nodepth} command removes
such a limit.
The precedence of operators can be reset at any time using the {\tt
setprecedence} command.
The current dependency list (axioms and definitions used in the proof
so far) can be seen using {\tt seedeps}.
\subsection{Local Term Manipulation}
The {\tt applythm} and {\tt applyconvthm} commands can be used to
apply a theorem or its converse to the selected subterm without
invoking the command interpreter. This might sometimes be useful for
editing tactics, for example. The {\tt applyenv} and {\tt
applyconvenv} commands allow similar use of saved environments.
The rule introduction commands {\tt ruleintro}, {\tt revruleintro},
{\tt altruleintro} and {\tt altrevruleintro} introduce embedded
theorems (given as parameters) connected with {\tt =>}, {\tt <=}, {\tt
=>>}, or {\tt <<=}, respectively. The command {\tt droprule}
removes an embedded theorem. The command {\tt altrule} toggles an
embedded theorem between standard and alternate versions. The commands
{\tt delay} and {\tt undelay} introduce and remove the special delay
operator {\tt \#}. The commands {\tt lazy} and {\tt unlazy} introduce
and remove the full laziness prefix {\tt \#\#}.
The special rule introduction commands {\tt matchruleintro} and {\tt
targetruleintro} will introduce a theorem (if there is one) which
matches a given equation or which converts the selected subterm to a
given form, respectively.
The {\tt execute} command invokes the tactic interpreter on the
selected subterm. The {\tt onestep} command carries out one step,
executing all innermost embedded theorems in parallel. The {\tt
steps} command does one step each time the enter key is struck.
\subsection{Global Variable Assignment}
There is a spectrum of assignment commands which make substitutions
for variables in a couple of different senses. These commands are
unusual in that they can affect the left side of the current equation.
The most important use of these commands is in assigning values to new
variables introduced by theorem applications; another use for them is
in entering large terms.
The {\tt assign}
command takes two terms as parameters and carries out the
substitutions for variables represented by the match of those two
terms (the simplest application is a statement like {\tt assign ?x
?y+?z}, which would replace all occurrences of {\tt ?x} with {\tt
?y+?z}). Note that substitutions have to be carried out globally
(not just in the selected term) and in both sides of the current
equation!
The {\tt assigninto} command takes as arguments a free
variable term and a term, and produces a new equation which results
from substituting both sides of the current equation for the free
variable parameter in the term parameter; this implements ``doing the
same thing to both sides of an equation''.
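For instance (a schematic sketch; the exact concrete syntax of the
parameters is an assumption), ``adding 1 to both sides'' of a current
equation {\tt ?a = ?b} might look like:
\begin{verbatim}
assigninto ?w ?w + 1
\end{verbatim}
\noindent substituting both sides of the equation for {\tt ?w} in {\tt
?w + 1} to produce the new equation {\tt ?a + 1 = ?b + 1}.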
Each of these commands can be suffixed with {\tt -it}, which causes
the last parameter to be set to the selected subterm.
The {\tt assignleft} and {\tt assignright} commands allow one to
assign the left or right side of a theorem to a variable.
\subsection{Finishing Up}
The basic command for completing a session is {\tt prove}, which takes
as a parameter the format of the theorem to be proved (the name plus
any parameters).
If the name/format environment variable has been set (this can be
ascertained using the {\tt envname} command), the command {\tt
autoprove} will record the current equation as a theorem with the
format indicated by the environment variable.
The {\tt reprove} command allows one to modify an existing theorem,
replacing it with the current equation. Normally, this is used for
debugging tactics. The only constraint is that the new version of the
theorem cannot have stronger dependencies than the old; if this were
not enforced, the dependency information of tactics which invoke the
edited theorem would be corrupted.
The {\tt autoreprove} variant of this command is often used, because
the {\tt getleftside}, {\tt getrightside}, and {\tt autoedit}
commands, which are often used for editing, all set the name/format
environment variable.
A proof environment does not close down when a theorem is proved; the
current equation remains the same until a new command in the {\tt
start} family is issued. In this case, the current equation is
usually automatically backed up (the {\tt backupenv} command is
invoked).
When a new theorem environment is started up, the attempt to back up
the previous theorem environment will generate an error message
applying to that previous environment if there is something wrong with
it.
\subsection{The Module System}
A theorem has an additional component not found in a proof
environment: this is a list of subsidiary theorems visible only to the
parent theorem. The intended use of this feature is to make it
possible to avoid cluttering the theorem list with subcommands of
tactics.
The {\tt pushtheorem} command allows one to ``hide'' one theorem in
another. It automatically comments the ``pushed'' theorem so that one
can tell where it is (it ceases to be visible on the master theorem
list). The {\tt poptheorem} command allows one to retrieve a theorem,
also posting an automatic comment. Variants {\tt pushtheorem2} and
{\tt poptheorem2} do not post comments; this is useful if one is
taking a subtactic out of a module in order to edit it, for example.
A theorem with subsidiary theorems is referred to as a ``module''.
Modules can be nested. Theorems inside a module are only available
when the parent theorem is being executed; be sure that a theorem
packed into a module is not needed by any theorem other than its
parent theorem or other theorems in the module! Their names are still
reserved.
The {\tt forget} command will eliminate an entire module if any
component depends on the theorem being forgotten. Theorem export will
work correctly in the presence of modules, but it will not export
module structure; exported modules will be completely unpacked.
\section{The Theory Environment}
This section describes the level on which the user interacts with
theorems and declarations in a fixed theory.
\subsection{Components of the Theory Environment}
The theory environment consists of the constant and operator
declaration list, the master theorem list, and some satellite
declaration lists and lists containing syntactical information. A
theory has a name which associates it with a file.
\begin{description}
\item[Precedences:] A list of operator precedences. Parity of the
precedence determines left/right associativity (even gives right,
odd left). Default precedence of all operators is 0.
\item[Default left arguments:] A list of default left arguments for
unary operators.
\item[Declarations:] There is a list of declarations of constants,
operators, and free variable operators (the last only for relative
typing purposes; free variable operators which do not need to be
typed do not need to be declared).
\item[Opacity declarations:] A list of operators declared opaque.
\item[Pretheorems:] A list of names of theorems which are intended to
be proved; needed to make recursive tactics possible while
preserving declaration checking for embedded theorems.
\item[Definitions:] A list of constants and operators with the
theorems which define them; the name of the theorem will be the same
as the name of the defined object, if it is a constant.
\item[Functional programs:] A list of automatic bindings of theorems
to constants and operators, implementing functional programming in
the tactic language.
\item[Theorems:] There is a list of theorems; this also includes
images of current and old saved proof environments. The structure
of an individual theorem parallels the structure of a proof
environment as noted above, with the addition of the list of
theorems supporting the module system. Proof environments saved on
the theorem list lose ``position of selected subterm'' information,
since this is the one item in the structure of a proof environment
that is not found in a theorem.
\end{description}
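To illustrate the parity convention on precedences (the precedence
assignments here are invented for the example): an operator with even
precedence associates to the right, one with odd precedence to the
left:
\begin{verbatim}
?x + ?y + ?z   parses as   ?x + (?y + ?z)    (even precedence: right)
?x - ?y - ?z   parses as   (?x - ?y) - ?z    (odd precedence: left)
\end{verbatim}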
\subsection{The Syntax Lists}
Use the {\tt setprecedence} command to set precedence of an operator.
Use the {\tt declareprefix} command to set a specific default left
argument for an operator; one can also declare a new unary operator
with {\tt declareunary} or declare an operator implicitly via the
definition or proof commands as a unary operator, in which case it is
automatically assigned a default left argument of {\tt defaultprefix}.
The {\tt showprecs} command will display all precedences, including
the formally redundant left/right associativity information.
Declaration display commands will show default left arguments for
operators which have them.
\subsection{Declarations and Comments}
Constants and operators can be declared using the {\tt
declareconstant}, {\tt declareinfix} (declares flat operators), and
{\tt declaretypedinfix} (declares operators with nontrivial type)
commands.
Constants and operators can also be declared implicitly via the
declaration and proof commands. It should be noted that theorem names
are constants of a theory! When constants are defined, the name of
the defining theorem is the same as the name of the object being
defined; when operators are defined, the name of the defining theorem
must be given as a parameter.
A single declaration can be viewed using the {\tt showdec} command. A
declaration line contains the identifier followed by the left and
right types (both 0 in the case of non-operators) and the default left
argument if there is one. The {\tt showalldecs} or {\tt scandecs}
commands can be used to view all declarations.
Declarations now also include comments. New comments (superseding any
previous comments) can be attached to a declared object using the {\tt
newcomment} command; the {\tt appendcomment} command appends comments
to any previously existing comments. {\tt showdec} and {\tt
thmdisplay} (and all derived commands) display comments.
\subsection{Special Declarations}
The {\tt declarepretheorem} command is used to indicate the intent of
proving a theorem with a given name (this can be an operator).
The {\tt declareopaque} and {\tt declarenotopaque} commands can be
used to enter and remove operators from the opacity list. Free
variable operators can be declared opaque to block them from being
declared with types. New restriction: {\tt declareopaque} can now be
used only to declare an operator initially; it is still possible to
use {\tt declarenotopaque} to remove opacity, but this is irreversible.
The {\tt makescin\/} and {\tt makescout\/} commands declare infixes or
functions as having input or output (resp.) restricted to absolute
types (strongly Cantorian sets). The user must supply a witnessing
theorem. These declarations allow the stratification restrictions on
terms involving these infixes to be weakened appropriately, without
the use of explicit type retractions.
The {\tt proveprogram} command can be used to bind a theorem to a
constant or operator as a ``functional program''. The theorem must
have a suitable form (the left side must be a substitution instance of
a term which would be admissible as the left side of a definition of
the constant or operator). The {\tt deprogram} command removes the
binding. The {\tt seeprogram} (for one constant or operator) or {\tt
seeallprograms} commands can be used to view information about this
list.
The {\tt forget} command will eliminate declarations of theorems
depending on or referring to a given theorem.
The definition commands are described above. A single definition can
be viewed using the {\tt seedef} command (it takes the name of the
defined object as parameter and displays the defining theorem); all
definitions can be seen using the {\tt seealldefs} command.
\subsection{Theorems}
Entry of theorems into the theorem list is usually via the proof and
definition commands. Axioms can be entered using the {\tt axiom}
command, which takes name, left side, and right side of the axiom as
parameters, or using the {\tt interaxiom} command, which interprets
the selected subterm of the right side of the current equation as an
equation (so it needs to be one!) and enters it as an axiom with a
name supplied as a parameter.
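A hypothetical use of {\tt axiom} (the theorem name is invented, and
the single-line parameter syntax is an assumption; the parameter
order, name then left side then right side, is as described above):
\begin{verbatim}
axiom pluscomm ?x + ?y ?y + ?x
\end{verbatim}
\noindent recording {\tt ?x + ?y = ?y + ?x} as an axiom named {\tt
pluscomm}.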
One views a single theorem using the {\tt thmdisplay} command. One
can view all theorems using {\tt showalltheorems} or {\tt scanthms}.
One can view the module associated with a theorem using {\tt
moddisplay}, or view all modules using {\tt showallmodules}. One can
view a theorem matching a given equation using {\tt showmatchthm}.
One can view a list of theorems applicable to the selected subterm
(excluding theorems applicable to every term) using {\tt
showrelevantthms}. One can view all saved environments using {\tt
showsavedenvs}. One can view all axioms and definitions using {\tt
showallaxioms}.
\subsection{Reaxiomatization and Redefinition}
Theorems are stored with dependency information, a list of the names
of axioms and definitions on which they depend. The dependency
structure of a theory can be completely restructured.
The {\tt makeanaxiom} command can force a theorem to become an axiom;
the {\tt proveanaxiom} command allows one to eliminate an axiom by
exhibiting a proof of it from other axioms, with appropriate global
dependency modifications. These tools can be used to support the use
of lemmas before they are proved.
The {\tt defineprimitive} command will convert a theorem into a
definition of a primitive constant or operator if it has the right
form. The {\tt redefineconstant} or {\tt redefineinfix} command will
replace one definition of a constant or operator (resp.) by another,
with appropriate dependency modifications. The {\tt undefine}
operator converts a definition to an axiom.
The elimination of unneeded dependencies on definitions (due to the
defined object not being mentioned) should be handled automatically by
the proof commands; this is the operational difference between a
definition and an axiom. At the moment, however, performance problems
with the function which removes unwanted dependencies on definitions
make it necessary to remove all automatic applications; the user may
apply the temporary {\tt reallyremovedefdeps} command to effect such
reductions of dependencies manually.
\section{The Desktop Environment}
This section discusses the management of theories and environments
other than the current one, whether saved on the desktop or in files.
It also discusses theory interpretations and their use in the theorem
export facility which allows results proved in one theory to be
exported to another.
\subsection{Loading and Saving Theories}
Theory files are saved with a fixed extension (currently {\tt .wthy}).
They are script files, written using commands of the alternative
interface invoked by the {\tt walk} command, which reconstruct all
information in the theory.
There is an environment variable representing the name of the current
theory, which can be read using the {\tt theoryname} command. The
{\tt storeall} command sets the theory name to its parameter and
stores the theory in a file with name obtained by adding the extension
to the theory name. The {\tt safesave} command saves a theory to the
file whose name is indicated by the current theory name variable. The
{\tt load} command sets the theory name environment variable to its
parameter and loads the appropriate theory file (automatically backing
up the current theory on the desktop).
The {\tt clear} command will clear the current theory; the {\tt
cleartheories} command will clear all theories from the desktop.
\subsection{Proof Environment Management}
Proof environments can be saved on the desktop while working in a
particular theory. When a theory is saved on the desktop, all
associated environments are saved with it (this is also true when
theories are saved to files, since environments are stored on the
theorem list).
The {\tt saveenv} command allows one to save a proof environment,
specifying a name/format to be associated with that environment. The
{\tt backupenv} command sets the name/format of the saved environment
to the current name/format environment variable if set by the user or
to {\tt backup}. The {\tt getenv} command loads a saved
environment; it will issue a warning if this environment was found
only on the theorem list and not on the current desktop. The {\tt
getenv} command and most commands of the {\tt start} family
automatically invoke {\tt backupenv}. An exception is if one is
actually using {\tt getenv} to retrieve the backup of the current
environment!
{\tt dropenv} will delete an environment (from the theorem list and
the desktop) and {\tt clearenvs} will eliminate all saved
environments. {\tt loadsavedenvs} allows one to load all saved
environments in a theory file onto the desktop (one must hit return to
load each such environment, as in the {\tt thmdisplay} family of
commands);
this can be useful if one wants to clear all such environments without
loading them onto the desktop individually.
\subsection{Theory Management}
A theory is always associated with the name of a file.
Theories may be saved on the desktop using {\tt backuptheory}. They
may be retrieved using {\tt gettheory}. The current theory may be
cleared using {\tt clear} and all theories may be cleared using {\tt
cleartheories}. {\tt load} and {\tt gettheory} automatically invoke
{\tt backuptheory}, unless one is using {\tt gettheory} to retrieve the
backup of the current theory.
A theory saved on the desktop is saved with all associated proof
environments.
\subsection{Views and Theorem Export}
A {\tt view} is an interpretation of a subset of a source theory in a
target theory, implemented as a set of translations of names of
constants and operators. Views reside in the source theory; they are
saved in theory files.
Views are constructed using the {\tt declareview} command, which
creates a trivial view with the predeclared constants and operators
interpreted as themselves. They can be edited using the {\tt
viewasin}, {\tt viewasselfin}, and {\tt dropfromview} commands.
Views are used to export theorems from the theory in which they reside
(the source theory) to another theory (the target theory). The target
theory needs to be saved on the desktop. The theorem export functions
{\tt exportthmlist} and {\tt exportthm}, given a view, information
about default prefixes to add to automatically generated names of
constants and operators, a single theorem or list of theorems to be
exported, and a target theory, will retrieve the target theory from
the desktop, automatically check the validity of the view and extend
it if necessary (the view must at least include interpretations of all
axioms and definitions on which the exported theorems depend, and
often does not need to include more than this; the rest can usually be
deduced), and generate analogous theorems in the target theory,
guaranteed to be valid theorems of the target theory. The facility
will automatically avoid name conflicts by adding additional prefixes
to automatically generated names. If a tactic is exported, it will
automatically determine what other theorems need to be exported and
translate them as well; complex mutually recursive systems of tactics
can be exported reliably. A reason to extend views beyond the minimal
level of axioms and definitions is to avoid the copying of frequently
used theorems found in many theories.
The theorem export commands can be finicky about defined notions in
the source theory; these need to match defined notions in the target
theory with analogous definitions. The redefinition facilities might
be useful in avoiding problems with this.
The theorem export commands do not export module structure; modules
will be exported, but completely unpacked.
\section{Places where bugs are likely to be found}
The theorem export facility is very complex and has not been tested
under all possible perverse conditions; it is likely to be reliable
but not perfectly reliable. Problems I am aware of involve the
treatment of numerals!
The variable binding context is a new addition (originally, the
bracket construction was used only for constant functions and there
were no bound variables at all). I suspect it of having bugs, and I
also suspect that I will find awkwardnesses in it which will require
me to revise it somewhat.
\section{Logical and Programming Style}
\subsection{Logical Style}
The Mark2 prover has been affected from the outset by certain
decisions of {\em logical style\/}. These are motivated by the
theoretical origins of the project in untyped synthetic combinatory
logic, drawing on Curry's program of untyped combinatory logic
without bound variables on the one hand, and the untyped set theory
``New Foundations'' ({\em NF\/}) of Quine on the other.
\subsubsection{The Main Points}
\begin{description}
\item[Strictly equational:] All reasoning under Mark2 is manipulation
of equations using equations, though this may be automated in ways
which can produce surprising effects. In particular, there is
nothing precisely corresponding to the usual style of reasoning in
propositional or predicate logic. Recently, we have been encouraged
in this approach by the work of Gries, Schneider and Cohen on
teaching an equational style of logic.
\item[Avoidance of bound variables:] Originally, Mark2 did not use
bound variables at all. It is still the case that Mark2's variable
binding facility is a sort of window dressing on top of the
synthetic abstraction facility supported, for example, by the
built-in theorem {\tt RAISE}. The motivation for this comes out of
the program of Curry, as well as work of Quine, Tarski and Givant,
and others, on ways to formalize first-order logic while avoiding
the use of variables. We were interested at the outset in the
question of how much these theoretically effective but practically
awkward approaches could be made more usable. A great deal of
development was successfully carried out with no use of bound
variables at all, and we feel that the early development of the
prover benefited from the simplicity afforded by their absence, but
our conclusion has been that bound variables have their place, both
as having actual practical advantages and as making the prover's
notation more familiar to hoped-for users. The current
variable-binding system uses de Bruijn levels; users seem to be able
to handle this.
\item[Avoidance of types:] This is a feature both of the systems of
Curry and of Quine's set theory. In fact, the original logic of the
prover (in an earlier version) was a synthetic combinatory logic
equivalent to first-order logic on an infinite universe, which {\em
did\/} have distinctions of sort. Moreover, the use of ``New
Foundations'' was at first avoided, although the prover project grew
out of an investigation of systems of combinatory logic equivalent
to {\em NF\/}. Considerations of polymorphism between the two sorts
of functions present in the original logic led us to abandon that
logic (basically a first-order logic) and adopt a higher order logic
equivalent to a consistent subset of {\em NF\/} (the consistency of
full {\em NF\/} remains an open question). A special state of the
prover implements a predicative version of {\em NF\/} which, while
technically a higher-order logic, is actually weaker than
first-order arithmetic; one regains full higher order logic upon
issuing the {\tt impredicative} command. The appearance of {\em
NF\/} was not obvious; the only role it played in the earliest
version of the prover using higher-order logic was to provide the
stratification criterion for parameterized function definitions.
The use of stratification had the desired immediate effect of
providing complete polymorphism with respect to commonly used
operations on functions.
We now have the additional feature of built-in type retractions onto
``strongly Cantorian sets'', which allow subversion of the
stratification restrictions when one is working in ``small''
domains.
The original logic had a base sort containing all objects of the
theory under consideration and a second and third sort consisting of
functions on the objects of the base type and operations on these
functions. All of these sorts were identified when the stratified
higher-order logic was adopted. It cannot be said that the
resulting identification of the domain of objects of a theory with
the domain of operators on those objects (only the type level ones,
though) has been entirely convenient, but it has not proved wholly
inconvenient, either. The lack of obligatory type notation is quite
liberating (though it necessitates care). Type labelling can be
introduced (use retractions as type labels) and it appears that
typing schemes of this sort are natural objects for manipulation
by the tactic language.
It should be noted that it is still not the case that all operators
are first-class citizens of the theory in which they are used; only
operators with trivial left and right type can be expected to
correspond to functions of the theory.
\end{description}
\subsubsection{The Logic of an Old Version}
Some difficulties arise from each of these points of style. The
solutions to these difficulties, and even the precise nature of the
difficulties themselves, are best appreciated by examining the theory
{\em EFT\/} (``external function theory'') which was implemented in
the original version of this theorem prover (the following is
excerpted from an unpublished paper).
{\em EFT\/} is an equational theory. There are two (apparent) sorts,
called objects and Functions. Object variables and constants will
begin with lower-case letters; Function variables and constants will
begin with upper-case letters.  The ``intended interpretation'' is that
the objects are the elements of some infinite set and the Functions
are the functions from that set into itself (or a subset of the
collection of such functions closed under certain operations). But
see below for an alternate interpretation.
$t$ and $f$ are distinct atomic object constants (used to represent
the truth values). If $x$ and $y$ are object terms, $(x,y)$ is an
object term, the ordered pair with projections $x$ and $y$. If $F$ is
a Function term and $x$ is an object term, $F[x]$ is an object term,
the value of $F$ at $x$ or the result of application of $F$ to $x$.
Cond and Id are atomic Function terms. Id is intended to be the
identity function; Cond$[(x,y),(z,w)]$ is intended to be $z$ if $x =
y$, $w$ otherwise. If $x$ is an object term, $|x|$ is a Function term,
the constant Function of $x$. If $F$ and $G$ are Function terms,
$(F,G)$ is a Function term, the product of $F$ and $G$, and
$F\left<G\right>$ is a Function term, the composition of $F$ and $G$.
If $F$ is a Function term, $F!$ is a Function, called the ``Hilbert
Function'' of $F$.  $F![y]$ is intended to be an object such that
$F[F![y],y]$ is not $t$, if there is any such object; its value is a
matter of indifference otherwise. We write $F[x,y]$,
$F\left<G,H\right>$, instead of $F[(x,y)]$, $F\left<(G,H)\right>$,
respectively. We define the $n$-tuple $(x_{1},x_{2},...,x_{n})$
inductively as $(x_{1},(x_{2},...,x_{n}))$. Sentences of {\em EFT\/}
are equations between object terms. The rules of {\em EFT\/} enable
substitutions of equals for equals:
\begin{description}
\item[A.] Reflexivity, symmetry, transitivity of equality.
\item[B.] If $a = c$, $b = d$ are theorems, $(a,b) = (c,d)$ is a theorem.
\item[C.] If $a = b$ is a theorem, $F[a] = F[b]$ is a theorem.
\item[D.] Uniform substitution of an object term for an object
variable or of a Function term for a Function variable in a theorem yields a theorem.
\end{description}
Note that the rules do not directly permit substitutions of equals for
equals where object terms appear as subterms of Function terms. The
axioms are as follows:
\begin{description}
\item[(CONST)] $|x|[y] = x$
\item[(ID)] Id$[x] = x$
\item[(PROD)] $(F,G)[x] = (F[x],G[x])$
\item[(COMP)] $F\left<G\right>[x] = F[G[x]]$
\item[(PROJ1)] Cond$[(x,x),(y,z)] = y$
\item[(PROJ2)] Cond$[(t,f),(y,z)] = z$
\item[(DIST)] $F[$Cond$[(x,y),(z,w)]] =$ Cond$[(x,y),(F[z],F[w])]$
\item[(HYP)] Cond$[(x,y),(F[x],z)] =$ Cond$[(x,y),(F[y],z)]$
\item[(HILBERT)] Cond$[(F[F![y],y],t),(F[x,y],t)] = t$
\end{description}
It should be clear that the axioms are true in the intended
interpretation (under the Axiom of Choice), and they should also serve
to clarify the exact interpretations of the term constructions. If a
version of the ``intended interpretation'' with a restricted class of
functions interpreting the Functions of the theory is to be
constructed, the axioms indicate the set of operations under which the
restricted class of functions needs to be closed.
We have the following Abstraction Theorem:
\begin{description}
\item[Theorem:] If $s$ is an object term and $x$ is an object variable which does
not appear as a subterm of any Function subterm of $s$, there is a
function term $(\lambda x)(s)$ such that $x$ does not appear as a subterm of
$(\lambda x)(s)$ and ``$(\lambda x)(s)[x] = s$'' is a theorem.
\item[Proof:] By induction on the structure of terms. If $s$ is $x$, $(\lambda x)(s)$ is
Id; if $s$ is an atom $a$ distinct from $x$, $(\lambda x)(s)$ is $|a|$. If $s$ is of the
form $(u,v)$, $(\lambda x)(s) = ((\lambda x)(u),(\lambda x)(v))$. If $s$ is of the form $U[v]$, $U$
does not involve $x$ and $(\lambda x)(s) = U\left<(\lambda x)(v)\right>$.
\end{description}
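As a small worked example (with $a$ an arbitrary atomic object
constant and $F$ an arbitrary atomic Function constant), abstracting
$x$ from the object term $(F[x],a)$ by the construction in the proof
gives
\[(\lambda x)((F[x],a)) = ((\lambda x)(F[x]),(\lambda x)(a)) =
(F\left<\mbox{Id}\right>,|a|),\]
and the axioms verify the conclusion of the Theorem:
\[(F\left<\mbox{Id}\right>,|a|)[x] =
(F\left<\mbox{Id}\right>[x],|a|[x]) = (F[\mbox{Id}[x]],a) = (F[x],a),\]
using (PROD), then (COMP) and (CONST), then (ID).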
$(\lambda xy)(s)$ such that ``$(\lambda xy)(s)[x,y] = s$'' is a theorem can be
defined as $(\lambda z)(s_{0})$, where $z$ is a variable not appearing in $s$ and $s_{0}$
is the result of replacing $x$ with Cond$[(t,t),z]$ and $y$ with
Cond$[(t,f),z]$ wherever they appear in $s$. $(\lambda
z)($Cond$[(t,t),z])$ and $(\lambda z)($Cond$[(t,f),z])$ are the
projection Functions $\pi_1$ and $\pi_2$.
An interesting point about abstracts constructed following the proof
of the Theorem is that they resemble the term from which they are
abstracted in structure. This helps to make {\em EFT\/} an
environment in which it is practical to avoid the use of bound
variables.
\subsubsection{Implementation of {\em EFT\/}}
The precursor of Mark2 was EFTTP ({\em EFT\/} Theorem Prover) which
had certain points of resemblance to Mark2 along with notable
differences.
The notation of EFTTP followed the term structure of {\em EFT\/}
exactly, with the addition of special term constructors for FUNCTORS
(operations on Functions, the third sort) and the attachment of
embedded theorems. The notation of EFTTP was very awkward; it did not
support infix notation at all, for example (the current {\tt ?x + ?y}
would be {\tt +[x?,y?]} under EFTTP).
The avoidance of bound variables was made practical by the direct
implementation of the Abstraction Theorem of the preceding section as
a tactic. Parameterized tactics were not at that time available, but
I will describe the {\tt ABSTRACT} tactic in its modern form, as it
appears in Mark2, rather than delving into the peculiar expedients of
EFTTP. The tactic is invoked in the following format {\tt (ABSTRACT @
term1) => term2}, and execution of the tactic yields a term {\tt
function\_term @ term1}, in which, if {\tt term2} can be expressed as
a function of {\tt term1} under appropriate constraints (such as
stratification; obviously a different abstraction theorem is being
implemented than in EFTTP!), the term {\tt function\_term} will not
depend on {\tt term1}. Moreover, as noted above, both in EFTTP and in
Mark2, the algorithm implemented by {\tt ABSTRACT} produces more or
less readable terms parallel in structure to the terms from which they
are abstracted, though not as readable as conventional
$\lambda$-terms. The implementation of {\tt ABSTRACT} is a subject
for the section on programming style (NOTE TO BE REMOVED: be sure to
note there why {\tt RAISE\/} is needed; the problem of abstraction
from operators). A tactic {\tt REDUCE}, which needs no parameter,
undoes the effect of {\tt ABSTRACT}. The two tactics together allowed
the automation of operations of global substitution under the prover,
which made it possible to go quite far without any use of bound
variables.
The Cond$[(x,y),(z,w)]$ construct of {\em EFT\/} allows the
construction of case expressions. The analogous construction under
Mark2 is {\tt (?x = ?y) || (?z , ?w)} (the second set of parentheses
would normally not appear). All reasoning under hypotheses is managed
in EFTTP or Mark2 as equational reasoning about case expressions. The
axioms (DIST) and especially (HYP) manage the system of hypothetical
reasoning of {\em EFT\/}, which is adequate to the demands of
propositional logic. In fact, a complete tautology checking tactic
has been developed from the {\em EFT\/} axioms under both provers.
Abstraction and reduction play an important role in the development of
tactics which support this style of reasoning efficiently; the
substitutions for arguments of functions in the axioms (DIST) and
(HYP) can be converted to global substitutions in expressions of
arbitrary form by the use of abstraction followed by reduction (the
user never sees an abstraction term).
Reasoning about case expressions is implemented in a ``hard-wired''
way in the new hypothesis facility (see above).
The paper from which the description of the {\em EFT\/} logic in the
previous subsection is drawn proves that {\em EFT\/} theories are
precisely equivalent to first-order theories with infinite domain (the
pairing forces the infinite domain).
The axiom (HILBERT) introduces an operator analogous to the Hilbert
epsilon operator for defining quantifiers (HOL has a similar approach
to defining quantifiers). A choice operator could be used in a
similar way in Mark2's logic; if it is desired that the Axiom of
Choice not be assumed, this can be made possible by declaring the
choice operator as ``opaque'' to abstraction.
\subsubsection{The Logical Preamble}
The prover automatically loads the declarations of certain logical
operators and axioms governing these operators. Most of these are
derived from axioms of {\em EFT\/}. They can be examined by starting
the prover and looking at the axioms that are present before any
theory is loaded.
\subsection{Programming Style}
The development of the tactic language has largely been driven by the
requirements of the {\tt ABSTRACT} and (to a lesser extent) {\tt
REDUCE} tactics. An outline of the development of some simple example
programs culminating in the development of limited (predicative)
abstraction and reduction tactics and an introduction to some of their
applications seems natural as an introduction to programming in the
tactic language. These tactics are no longer used in practice, with
the introduction of variable binding and the {\tt BIND} and {\tt EVAL}
built-in tactics, but tactic programming remains useful.
\subsubsection{Basic Control Structures}
\begin{description}
\item[Sequencing:] The term {\tt thm\_5 => thm\_4 => thm\_3 => thm\_2 =>
  thm\_1 => term}, when ``executed'', will apply the theorems or
  tactics numbered 1 through 5, in that order, to the original term.
\item[Alternation:] The term {\tt thm\_5 =>> thm\_4 =>> thm\_3 =>> thm\_2
=>> thm\_1 => term}, when ``executed'', will apply the first of the
theorems or tactics numbered 1 through 5 (as above, in that order,
working from the inside out) which applies to the term. No more
than one of the tactics will be applied. In principle, the
sequencing construction could be used for this as well, if one were
certain that applying one of the tactics would not give a form to
which one of the later tactics in the list might unexpectedly apply.
\item[Conditionals:] All conditional execution in the tactic language
depends on the fact that a theorem is not applied if it does not
match its target. The termination of recursive tactics, for
example, depends on this.
\item[Guarded commands using {\tt *>} and {\tt <*}:] The term {\tt
(thm\_n *> guard\_n) =>> ... =>> (thm\_1 *> guard\_1) => term} will apply
the first of the theorems {\tt guard\_i} which applies to the term, and
then will further apply {\tt thm\_i}. An advantage of this is that
{\tt thm\_i} might be constructed using sequencing on a very general
term, and so might apply under much more general circumstances than
those specified by applicability of {\tt guard\_i}; before this
construction was available, it would have been necessary to
construct a new theorem with the effect of each composite {\tt thm\_i
*> guard\_i}.
\end{description}
\subsubsection{Recursion}
The real power of the tactic language arises from the possibility of
proving recursive or mutually recursive tactics. A simple example is
an expansion tactic for algebra:
Suppose we have the following theorems available:
\begin{verbatim}
COMM: ?x * ?y = ?y * ?x
DIST: ?x * (?y + ?z) = (?x * ?y) + (?x * ?z)
COMMDIST: (?x + ?y) * ?z = (?x * ?z) + (?y * ?z)
(* COMMDIST is obviously provable from axioms COMM and DIST *)
\end{verbatim}
We declare a pretheorem {\tt X} and prove the following theorems
(tactics):
\begin{verbatim}
X1: ?x + ?y = (X => ?x) + (X => ?y)
X2: ?x * ?y = X1 => COMMDIST =>> DIST => (X => ?x) * (X => ?y)
\end{verbatim}
The tactic {\tt X1} is used to expand sums: it can be summarized by
``expand each term''.
The tactic {\tt X2} is used to expand products: it can be summarized
by ``expand each factor, then apply the first appropriate of
distributivity from the left and the right, then expand each term if
the result is a sum''.
After proving these, the theorem {\tt X} is anticlimactic:
\begin{verbatim}
X: ?x = X1 =>> X2 => ?x
\end{verbatim}
All that this tells us to do is to apply whichever of {\tt X1} and
{\tt X2} might be appropriate. Note that {\tt X} applies successfully
to any term whatever; if one wanted to restrict circumstances under
which {\tt X} would be applied, one might want to use the {\tt *>} or
{\tt <*} operator.
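As a sketch of how these tactics behave (assuming atomic constants
{\tt a}, {\tt b} and {\tt c} have been declared; recall that all
operators share a default precedence and associate to the right, so
the parentheses are needed), applying {\tt X} to {\tt (a + b) * c}
should first expand the factors (changing nothing here), then apply
{\tt COMMDIST}, then expand each resulting term, producing output
along these lines:
\begin{verbatim}
X => (a + b) * c
- execute();
{(a * c) + (b * c)}
\end{verbatim}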
\subsubsection{Abstraction and Reduction Algorithms}
In this section, verbatim text from prover output is presented. The
reader should remember that default precedence under Mark2 is equal
for all operators, and that all operators associate to the right.
We present a limited abstraction tactic. This tactic {\tt ABSTRACT}
has the form
\begin{verbatim}
ABSTRACT @ ?x:
?y =
(ABSCONST @ ?x) =>> (ABSINFIX @ ?x)
=>> (ABSCOMP @ ?x) =>> (ABSID @ ?x) => ?y
\end{verbatim}
The tactic takes as a parameter the term with respect to which we will
abstract, and applies a tactic taken from a list of alternatives (each
of which inherits the parameter).
\begin{verbatim}
ABSID @ ?x:
?x =
Id @ ?x
\end{verbatim}
The only thing which distinguishes the {\tt ABSID} subtactic from the
converse of the defining axiom of the identity function {\tt Id} is
the presence of the parameter, which ensures that only the term from
which we are abstracting will have this subtactic applied to it.
\begin{verbatim}
ABSCOMP @ ?x:
?f @ ?y =
COMP <= ?f @ (ABSTRACT @ ?x) => ?y
COMP:
(?f @@ ?g) @ ?x =
?f @ ?g @ ?x
\end{verbatim}
We display the {\tt ABSCOMP} subtactic and the defining axiom {\tt
COMP} of the function composition infix {\tt @@} on which it
depends. The recursive application of {\tt ABSTRACT} should convert
{\tt ?y} to something of the form {\tt ?Y @ ?x}, and application of
the converse form of {\tt COMP} to {\tt ?f @ ?Y @ ?x} will give {\tt
(?f @@ ?Y) @ ?x}. Note that it is assumed that {\tt ?x} does not
appear as a subterm of the function {\tt ?f}; this restriction is
essentially the same as the restriction on abstraction in {\em EFT\/}.
It is possible to write more powerful abstraction tactics, but this
one has proved adequate for most applications we have worked on.
\begin{verbatim}
ABSINFIX @ ?z:
?x ^+ ?y =
RAISE => ((ABSTRACT @ ?z) => ?x)
^+ (ABSTRACT @ ?z) => ?y
\end{verbatim}
The built-in theorem {\tt RAISE} is described elsewhere in this
documentation. If {\tt \^{ }+} is assumed to have trivial relative
types, {\tt RAISE} has the effect of the theorem {\tt RAISE0: (?f @
?x) \^{ }+ (?g @ ?x) = (?f :\^{ }+ ?g) @ ?x}. The theorem has more
complicated effects on operators which have nontrivial relative types.
The reason why the theorem {\tt RAISE} is needed is that operators (at
least, operators with nontrivial type information) are not first-class
objects in the logic of Mark2, and abstraction with respect to
operators presents logical difficulties. The theorem {\tt RAISE0}
above can now be stated as an axiom or proved as a theorem using {\tt
RAISE}, but even this kind of abstraction was not possible until we
permitted the declaration of operator variables with fixed type
information.
The effect of the recursive applications of {\tt ABSTRACT} should be
to convert the whole term to the form {\tt RAISE => (?X @ ?z) \^{ }+
(?Y @ ?z)}, and application of {\tt RAISE} will convert this to the
form {\tt (?X :\^{ }+ ?Y) @ ?z}.
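As an illustration of the interaction of {\tt ABSINFIX} with {\tt
RAISE} (a sketch, assuming constants {\tt f} and {\tt g} have been
declared and the infix {\tt \^{ }+} has trivial relative types), the
recursive abstractions convert each side to a composition applied to
{\tt z}, after which {\tt RAISE} factors out the common argument:
\begin{verbatim}
(ABSTRACT @ z) => (f @ z) ^+ (g @ z)
- execute();
{((f @@ Id) :^+ (g @@ Id)) @ z}
\end{verbatim}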
The three subtactics above handle the cases where we see the term from
which we are abstracting by itself, or where we see a function
application or an infix term. In all other cases, we use a constant
function as our abstraction:
\begin{verbatim}
ABSCONST @ ?x:
?y =
[?y] @ ?x
\end{verbatim}
This tactic differs from the converse of the defining axiom for
constant functions only in having a parameter. Its effect is to
express {\tt ?y} as a function of {\tt ?x} in a trivial way. The
alternative control structure is very useful here, since this
subtactic applies to any term at all! The abstraction tactics written
before alternative rule infixes were available applied this subtactic
first, then looked at the result and reversed it where it could be
recognized that the other cases applied; these algorithms were a
little harder to understand!
The execution of the term {\tt (ABSTRACT @ ?x) => ?t} always has the
effect of producing a term of the form {\tt ?T @ ?x}; for a wide class
of terms {\tt ?t}, the term {\tt ?T} will not contain any instance of the
term {\tt ?x}. The term matching {\tt ?x} does not have to be a
variable!
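For instance (a sketch, with {\tt f}, {\tt a} and {\tt b} hypothetical
declared constants), abstraction with respect to the non-variable term
{\tt a + b} should proceed via {\tt ABSCOMP} and {\tt ABSID}:
\begin{verbatim}
(ABSTRACT @ (a + b)) => f @ (a + b)
- execute();
{(f @@ Id) @ (a + b)}
\end{verbatim}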
We present a reduction algorithm which ``undoes'' the abstraction
algorithm above.
\begin{verbatim}
REDUCE:
?x =
((LEFT @ REDUCE) *> (RIGHT @ REDUCE) *> RAISE)
<<= ((RIGHT @ REDUCE) *> COMP) =>> CONST =>> Id
=> ?x
\end{verbatim}
This algorithm has simpler recursion than the {\tt ABSTRACT}
algorithm, but it involves the use of more sophisticated control
structures. The theorems {\tt Id} and {\tt CONST} (not exhibited)
have the effect of applying identity and constant functions
respectively. The theorem {\tt COMP} is exhibited above (the
definition of function composition). The effect of the {\tt *>}
operator is to cause (in the first instance) the theorem {\tt RIGHT @
REDUCE} to be applied as well when {\tt COMP} is applicable (this is
the ``guarded command'' format described above). We examine the
parameterized theorem {\tt RIGHT}:
\begin{verbatim}
RIGHT @ ?th:
?x ^+ ?y =
?x ^+ ?th => ?y
\end{verbatim}
This theorem takes a theorem {\tt ?th} as a parameter, and applies it
to the right subterm of the term to which the parameterized theorem is
applied. This is useful when the subterm to which this theorem is to
be applied does not yet exist in the term to which the parameterized
theorem is applied (in this case, the subterm in question is to be
created by the application of {\tt COMP}).
A segment of output of the tactic debugger is exhibited to illustrate
how this works:
\begin{verbatim}
((LEFT @ REDUCE) *> (RIGHT @ REDUCE) *> RAISE)
<<= ((RIGHT @ REDUCE) *> COMP) =>> CONST =>> Id
=> (?f @@ Id) @ ?x
(RIGHT @ REDUCE) => ?f @ Id @ ?x
?f @ REDUCE => Id @ ?x
\end{verbatim}
The parameterized theorem {\tt LEFT} used above is exactly analogous
in form to {\tt RIGHT}, except that it operates on left subterms. A
related theorem is the theorem {\tt \$} (used below), exhibiting the possibility of theorems whose names
are operators. This theorem has the effect of applying the theorem
which is its argument in its ``converse'' form:
\begin{verbatim}
$?th:
?x =
?th <= ?x
\end{verbatim}
In connection with proving theorems like {\tt LEFT}, {\tt RIGHT}, and
{\tt \$}, it is useful to note that the rule introduction commands
will accept variables as theorems!
The effect of the complex theorem {\tt ((LEFT @ REDUCE) *> (RIGHT @
REDUCE) *> RAISE)}, applied in the reverse sense, which is the last
alternative in {\tt REDUCE}, is to apply the converse of {\tt RAISE}
(if it is applicable) and follow this by reducing the left and right
subterms of the resulting term.
The use of guarded command format eliminates the necessity in earlier
versions of special theorems for each of the separate cases.
As an example of the application of abstraction and reduction
algorithms, we present a tactic which implements the axiom (HYP) of
{\em EFT\/} in a very powerful form:
\begin{verbatim}
PIVOT:
(?a = ?b) || ?x , ?y =
(RIGHT @ LEFT @ REDUCE) => HYP => (?a = ?b)
|| ((ABSTRACT @ ?a) => ?x) , ?y
\end{verbatim}
The effect of this tactic is to substitute the term matching {\tt ?b}
for the term matching {\tt ?a} throughout the term matching {\tt ?x}.
The effect of abstraction is to convert {\tt ?x} to something like
{\tt ?X @ ?a}. The theorem {\tt HYP} from the logical preamble can
then be applied:
\begin{verbatim}
HYP:
(?a = ?b) || (?f @ ?a) , ?c =
(?a = ?b) || (?f @ ?b) , ?c
\end{verbatim}
and the reduction algorithm (directed to the appropriate subterm)
cleans up. The user never sees an abstraction term:
\begin{verbatim}
PIVOT => (?a = ?b) || (?a & ?a) , ?b
val it = () : unit
- execute();
{(?a = ?b) || (?b & ?b) , ?b}
\end{verbatim}
The use of theorems like {\tt PIVOT} may be largely superseded by the
new facility for reasoning under hypotheses (see above), but it is
still an instructive programming example.
We exhibit abstraction and reduction theorems which use the new
variable binding capabilities of Mark2: these algorithms actually use
the synthetic abstraction and reduction algorithms in conjunction with
the built-in theorem {\tt VALUE} (documented here) and the axiom {\tt
BETA} also given (note the use of the theorem {\tt \$} described
above):
\begin{verbatim}
BETA:
[?f @ ?1] =
?f
BETAABSTRACT @ ?x:
?y =
(LEFT @ VALUE @ [REDUCE]) => (LEFT @ $BETA)
=> (ABSTRACT @ ?x) => ?y
BETAREDUCE:
?f @ ?x =
REDUCE
=> (BETA => (VALUE @ [ABSTRACT @ ?1]) => ?f) @ ?x
\end{verbatim}
When working with variable binding contexts, it is useful to remember
that there are strong restrictions on the conditions under which free
variables of Mark2 can match terms containing bound variables.
It is useful to be aware that the new theorems {\tt EVAL} and {\tt
BIND} implement abstraction and reduction directly for functions
expressed by variable binding contexts; use of the bound-variable-free
abstraction features of the prover is now entirely optional.
\section{Command Reference}
This is a description of the theorem prover on an abstract level
organized around data types. The order is top-down.
This is an older layer of documentation and its consistency with the
remarks above may not be complete; it also may not refer to all
commands referenced above. I will be working on remedying this.
\subsection{Command Line}
The interface to the prover is a command line. Each command consists
of a command name followed by a number of arguments. The number and
type of arguments is determined by the specific command and is never
variable. The types of arguments which occur are strings, integers,
and terms of the internal language. In the ML interface, string and
term arguments are both handled as strings and so enclosed in double
quotes; term arguments need special delimiters, as they may include
significant spaces. There is a variant command line interface with
single letter commands and arguments separated by tabs in the current
ML version; it is invoked by the ``walk'' command. Command names in
the variant interface can be seen by typing ``h'' in that interface.
Another command line interface, very similar to the ML interface, is
invoked by the {\tt noml} command; this uses the same file parser used
by the {\tt script} command.
The standard interface of the current prover is actually the ML
interpreter; it is useful to be aware that ML requires all arguments
except integers to be enclosed in double quotes, uses tilde for
integer negation, requires an argument () for commands with no
parameters, and requires that each line end with a semicolon. See the
tutorial for examples.
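A hypothetical session fragment illustrating these conventions (the
theorem {\tt COMM} is assumed to have been proved or given as an
axiom):
\begin{verbatim}
- start "?x + ?y";      (* term arguments are quoted strings *)
- ruleintro "COMM";     (* apply the theorem COMM as a rewrite rule *)
- execute();            (* parameterless commands need () *)
- setprecedence "+" 5;  (* integer arguments are not quoted *)
\end{verbatim}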
\subsection{List Scanning Commands}
This subsection describes a family of commands for viewing various
kinds of lists in the prover. Each of these commands puts one in an
environment with the following one-letter commands:
\begin{description}
\item[l] ``left'' -- go to the item with alphabetically previous
label.
\item[r] ``right'' -- go to the item with alphabetically next label.
\item[Enter] view this item.
\item[q] quit.
\item[h (or any other key)] ``help'' -- see a list of commands.
\end{description}
The commands available include {\tt scandecs} (declarations), {\tt
scanprograms} (functional program bindings), {\tt scantheorems}
(theorems), {\tt scanenvs} (saved proof environments), and {\tt
scantheories} (theories on the desktop).
\subsection{Command Abbreviations}
These are abbreviations for names of commonly used commands found below.
\begin{verbatim}
(* /////////// ml declarations to change the names of commands //////////// *)
fun ri x = ruleintro x;
fun rri x = revruleintro x;
fun ari x = altruleintro x;
fun arri x = altrevruleintro x;
fun s x = start x;
fun p x = prove x;
fun rp x = reprove x;
fun ex() = execute();
fun di x = declareinfix x;
fun du x = declareunary x;
fun a x = axiom x;
fun dpt x = declarepretheorem x;
fun sp x n = setprecedence x n;
fun td x = thmdisplay x;
fun wb() = workback();
fun ae x = autoedit x;
fun tri x = targetruleintro x;
fun smt x = showmatchthm x;
fun srt() = showrelevantthms();
fun dti s t = declaretypeinfo s t;
fun uti s = detypeinfo s;
fun sat() = showalltheorems();
\end{verbatim}
\subsection{Desktop}
A desktop consists of several workspaces on the desktop and one
current workspace. The workspaces on the desktop may include a
version of the current workspace. Each workspace is associated with a
theory (there can be no more than one workspace associated with each
theory); the name of a workspace (unless it is ``scratch'') will be
the name of a saved theory file.
The new commands {\tt versiondate} and {\tt script} are treated as
desktop commands more or less by default, as are some recently
introduced commands for controlling the behavior of scripts.
Desktop commands include
\begin{description}
\item[versiondate:] Displays the date of the version of Mark2 being used.
\item[script $<$string$>$:] This command has the effect of the ML {\tt
use} command, but adds the extension {\tt .mk2} to its argument.
Files with this extension are expected to be ML proof scripts.  The
{\tt script} command no longer uses the ML interpreter; a new file
parser has been written.
\item[setpause, setnopause:] {\tt setpause} causes the prover to pause
whenever it issues an error message. This is useful when running
proof scripts. {\tt setnopause} reverses this effect.
\item[bequiet, speakup:] {\tt bequiet} suppresses all output usually
seen from proof scripts, except error messages. {\tt speakup}
reverses this effect, which is necessary if normal prover function is
to be restored.
\item[noml:] Invokes a new interface which uses the same file parser
as the {\tt script} command. Commands in this interface have the same
format as in the ML interface, as long as the new file parser is
familiar with the command. The () (type unit) parameter to commands
which don't have a real parameter is optional with this interface; the
null {\tt quit} command exits this interface (and must end scripts to
be run by the {\tt script} command).
\item[demoline $<$string$>$:] Used to make comments with pauses in
demo scripts.
\item[initializecounter:] Sets the new variable counter to zero. Useful
in scripts, as otherwise names of new variables prove unpredictable.
\item[storeall $<$string$>$:] Save the current workspace to a theory file
whose name is the argument with an extension added (currently
``.wthy'').
\item[safesave:] Stores the current workspace to a file; determines the
argument from the name of the workspace.
\item[load $<$string$>$:] Load a saved theory from the theory file indicated
by the argument, backing up the current theory onto the desktop.
\item[clear:] Set the current theory to ``scratch'' (nothing but the
logic prelude).
\item[clearpredicative:] As {\tt clear}, with the additional
``predicativity'' restrictions on abstraction. Of technical interest only.
\item[gettheory $<$string$>$:] The argument is the name of a theory. Get
this theory off the desktop (backing up the current theory unless the
name of the current theory itself is the argument).
\item[backuptheory:] Save the current theory on the desktop.
\item[exportthm, exportthmlist (parameters not indicated):] Commands
for theorem export. Parameters of exportthmlist are a view name, a
prefix to add to constants automatically generated, a prefix to add
to operators automatically generated, a list of theorems to be
exported, and the name of the target theory (which must be on the
desktop). Parameters of exportthm omit the prefix for operators and
replace the list of theorems with a single theorem. Theorem export
is likely to generate many warnings of repeated constant
declarations.
\end{description}
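For example, a session managing theory files might run as follows (a
hypothetical sketch: the command names are those documented above, but
the quoting of string arguments and the parenthetical annotations are
illustrative assumptions, not part of the prover's syntax):
\begin{verbatim}
clear                  (start from the "scratch" theory)
storeall "mytheory"    (saved to the file mytheory.wthy)
load "othertheory"     (current theory backed up to the desktop)
gettheory "mytheory"   (retrieve mytheory from the desktop again)
\end{verbatim}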
\subsection{Workspace}
A workspace consists of a number of ``saved'' proof environments, all
associated with the same theory, whose name is the name of the
workspace, and a current proof environment (one of the ``saved''
environments may be an older version of the current one).
Workspace commands include
\begin{description}
\item[theoryname:] See the name of the current theory.
\item[start $<$term$>$:] Create a proof environment, with name a
default, and with the term under consideration given by the
argument. Any existing proof environment is backed up.
\item[startfor $<$term$>$ $<$term$>$:] As ``start'', except that the
name of the proof environment is specified by the first term
argument.
\item[getleftside {\em or\/} getrightside $<$string$>$:] Create a new
proof environment with starting term the appropriate side of the
theorem indicated by the argument and name set to the ``format'' of
that theorem.
\item[autoedit $<$string$>$:] Creates a proof environment with name,
left side of equation and right side of equation determined by the
theorem whose name is the argument.
\item[getenv $<$term$>$:] Get the proof environment named by the
argument, backing up the current one unless the current one is the
one being retrieved.
\item[saveenv $<$term$>$:] Save the current proof environment with the
indicated name.
\item[backupenv:] Save the current proof environment under its own
name or as ``backup'' if its name is the default.
\end{description}
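A typical workspace interaction might look like the following (a
hypothetical sketch; the term syntax and the parenthetical annotations
are illustrative, and the concrete conventions should be checked
against a working installation):
\begin{verbatim}
start ?x + ?y        (new environment with a default name)
saveenv lemma1       (save the current environment as lemma1)
start ?a * ?b        (new environment; the old one is backed up)
getenv lemma1        (return to lemma1, backing up the current one)
\end{verbatim}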
\subsection{Proof Environment and Theorem}
The ``proof environment'' and ``theorem'' data types are closely
related and will be discussed together here.
A proof environment has five ingredients: a name (which can usually be
thought of as a string, but must technically be regarded as a term,
because it was formerly parsed to determine the ``format'' of the
theorem to be proved; name and format are now distinct), the left
side of the equation being proved, the right side of the equation
being proved, a list of names of axioms on which the theorem to be
proved depends, and, finally, a list of booleans which represents the
position of a specific subterm of the right side of the equation to be
proved.
A theorem has a name, a ``format'' (a term which determines with what
arguments the theorem should be invoked), a left side, and right side,
a list of dependencies, and a list of theorems used by the module
system.
It appears that the name and format ought to be distinct elements of
proof environments (this is now the case). Dependencies are expected
to be expanded to include dependencies on definitions and on programs;
this will improve reaxiomatization and theorem export facilities.
There are no commands specifically manipulating the theorem type
(theorems appear as components of theories, for which see below).
Proof environment commands include the following (for commands
creating proof environments, see the workspace subsection; for
commands which actually prove theorems, see the theory subsection):
\begin{description}
\item[envname:] See the name of the current environment.
\item[look {\em or\/} lookhere {\em or\/} lookback:] ``look'' displays
the right side of the equation (with the current subterm in braces)
above and the current subterm below. Invoked automatically by many
commands. ``Lookhere'' shows the current subterm only; ``lookback''
shows the left side of the equation.
\item[lookhyp $<$integer$>$:] Displays the hypothesis indexed by the
integer (see discussion of the hypothesis facility above).
\item[lookhyps:] Displays all hypotheses.
\item[seedeps:] See the list of names of axioms on which the current
theorem depends.
\item[startover {\em or\/} starthere {\em or\/} workback:]
Manipulations of general equation structure. ``Startover'' sets
both sides of the equation to the left side and clears dependencies,
and ``starthere'' does the same with the right side. ``Workback''
interchanges the two sides of the equation and leaves dependencies
unaffected.
\item[left {\em or\/} right {\em or\/} up {\em or\/} top:] Basic
``movement commands''. These cause the current subterm to be reset
to its ``left child'', ``right child'', ``parent'', or the whole
right side of the equation, respectively. ``Constant function''
terms have only one ``child'', which is treated as both left and
right child; if one ``moves'' left or right from an atomic term, one
gets an error condition, but it is currently necessary to move up or
to the top to escape this condition (the list of booleans does get
set to the non-existent position!).
\item[upto {\em or\/} downtoleft {\em or\/} downtoright $<$term$>$:]
Sophisticated movement commands; they move as indicated (up, or down
to left or right) until a term matching the argument is reached. If
no such term is found, an error is reported; the position one ends
up in depends on the command.
\item[New movement commands:] Commands {\tt uptols}, {\tt uptors},
{\tt dlls}, {\tt dlrs}, {\tt drls}, and {\tt drrs} enable movement to
places where theorems can be applied. Commands beginning with {\tt
upto-} implement ``upto''; commands beginning with {\tt dl-} and
{\tt dr-} implement ``downtoleft'' and ``downtoright''; the suffix
indicates ``left side'' or ``right side''.
\item[ruleintro {\em or\/} revruleintro $<$term$>$:] Introduce an
application of a theorem or its converse (indicated by use of the
connectives ``$=>$'' or ``$<=$'' respectively in the displayed
result). The theorem may have parameters, thus the argument must be
a term. The dependencies of this theorem are added.
\item[matchruleintro {\em or\/} targetruleintro $<$term$>$:]
Introduce an embedded theorem or theorem converse which matches a
given equation (the first command) or will convert the current
subterm to a given target term (the second command).
\item[altrule:] If the current subterm is an infix term with infix
``$=>$'' or ``$<=$'', convert the infix to $=>>$, $<<=$ respectively
(or the reverse). This converts embedded theorems and converse
theorems to the form appropriate for lists of alternatives no more
than one of which should be applied.
\item[droprule:] Drop an embedded theorem from the top level of the
current subterm.
\item[delay {\em or\/} undelay:] Introduce or eliminate the special
prefix ``\#'' which converts operators with ``functional programs''
associated to ``inert'' form.
\item[lazy {\em or\/} unlazy:] Introduce or eliminate the full
laziness prefix {\tt \#\#}.
\item[execute {\em or\/} steps {\em or\/} onestep:] Invoke the
interpreter (resp.\ debugger) for the internal programming language.
In general terms, ``execute'' causes all embedded theorems to be applied
following a depth-first strategy; ``steps'' simulates ``execute''
step by step with some limited parallelism. More parallel versions of
these commands, {\tt Execute} and {\tt Steps}, are expected to be
implemented.
The steps command goes into a loop which is broken by hitting ``q'';
it stops automatically if the term stabilizes. If you break out of
any of the tactic interpreters using Control-C, you should issue the
``top'' command to reset environment variables.
\item[applythm {\em or\/} applyconvthm $<$term$>$:] Apply the
indicated theorem or converse theorem (possibly with parameters);
does not apply any new embedded theorems produced by this
application. Useful for editing.
\item[applyenv {\em or\/} applyconvenv $<$term$>$:] As above, except
that saved environments are used.
\item[assign {\em or\/} assigninto $<$term$>$ $<$term$>$:] Carries out
global assignments to variables; can affect both sides of the
equation under construction. The first argument is a variable term.
The second argument is any term. In the case of ``assign'', the
second argument is substituted for the first throughout the left and
right sides of the current equation. In the case of ``assigninto'',
the left and right sides of the current equation are substituted for
the variable first argument in the second argument to obtain the new
left and right sides.
\item[assignit $<$term$>$:] Equivalent to the assign command with the
variable first argument given on the command line and the second
argument the current subterm.
\end{description}
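A single proof step using the commands above might combine movement,
rule introduction, and execution as follows (a hypothetical sketch;
the theorem name {\tt COMM}, the term syntax, and the annotations are
invented for illustration):
\begin{verbatim}
start ?x + ?y        (begin work on an equation about ?x + ?y)
top                  (select the entire right side)
ruleintro COMM       (embed the rewrite rule COMM at the selection)
execute              (apply all embedded theorems depth-first)
lookback             (compare with the left side of the equation)
\end{verbatim}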
\subsection{Theory}
A theory consists of a list of theorems, a list of constant
declarations, and some auxiliary lists of declarations. Theories can
be saved to and loaded from files.
The lists which comprise a theory at this time (other than lists used
by the parser which will be discussed below under Term) are
\begin{description}
\item[CONSTANTS:] Master list of constant declarations. Contains all
identifiers and operators; for operators, reports the relative types of
their arguments. Now contains comments.
\item[THEOREMS:] Master list of theorems. For the structure of a
theorem, see above.
\item[PRETHEOREMS:] List of identifiers or operators which will
eventually be proved as theorems (needed for recursive theorem
definitions).
\item[DEFINITIONS:] List of defined identifiers and operators with
their defining theorems.
\item[PROGRAMS:] List of bindings of identifiers and operators to
theorems used for simulation of functional programming.
\item[OPAQUE:] List of operators declared as opaque to abstraction.
\end{description}
These could reasonably be collapsed into one or two master lists.
Additional lists used by the parser referenced under Term below are
also an essential part of a theory.
Theory commands include the following:
\begin{description}
\item[declareconstant $<$string$>$:] Declare an identifier as a
constant.
\item[declaretypedinfix $<$integer$>$ $<$integer$>$ $<$string$>$:]
Declare an infix. The string is its form; the integers are the
``relative types'' of its left and right arguments.
\item[declareinfix $<$string$>$:] Declare a ``flat'' infix (with both
relative types 0).
\item[declareunary $<$string$>$:] Declare a unary prefix (its default
left argument will be {\tt defaultprefix}).
\item[newcomment $<$string$>$ $<$string$>$:] Enter new comments (the
second string) on the declared object named by the first string (any
older comments will be superseded). These will be displayed by
declaration and theorem display commands.
\item[appendcomment $<$string$>$ $<$string$>$:] As {\tt newcomment},
except that comments are appended to existing comments.
\item[makeunary $<$string$>$:] Give an existing infix the default
left argument {\tt defaultprefix}.
\item[declarepretheorem $<$string$>$:] Declare an identifier (or
infix) as a prospective theorem (so that it can be introduced as an
embedded theorem; useful in cases of recursion).
\item[declareopaque $<$string$>$:] Declare an operator as opaque to
abstraction. Used to declare a new operator; an existing operator
cannot be made opaque.
\item[makescin {\em or\/} makescout $<$string$>$ $<$string$>$:] These
commands allow the user to declare the function or operator denoted by
the first string argument to have input or output (resp.) restricted
to absolute types (strongly Cantorian sets). The second string
argument must be the name of a theorem witnessing this fact.
\item[proveprogram $<$string$>$ $<$string$>$:] Bind to the constant
or operator named by the first argument the theorem named by the second
as a ``functional program''.
\item[deprogram $<$string$>$:] Remove any program bound to the
constant or operator named by the argument.
\item[pushtheorem {\em or\/} pushtheorem2 $<$string$>$ $<$string$>$:]
Hide the theorem named by the first string in the module associated
with the theorem named by the second string. The hidden theorem is
only visible when the parent theorem is executing, but its name
remains reserved. The first version of the command posts a comment to
the declaration of the hidden theorem automatically; the second does
not.
\item[poptheorem {\em or\/} poptheorem2 $<$string$>$ $<$string$>$:]
Reverses the effect of {\tt pushtheorem}. {\tt poptheorem} posts a comment
automatically to the declaration of the no longer hidden object; {\tt
poptheorem2} does not.
\item[axiom $<$string$>$ $<$term$>$ $<$term$>$:] Creates an axiom with
name the first string argument, left side the second argument and
right side the third argument. An axiom can be recognized by its
list of axiom dependencies consisting of its own name. Axioms
cannot take parameters, so the first argument does not need to be a
term.
\item[interaxiom $<$string$>$:] If the current subterm is an
equation, makes it an axiom.
\item[prove $<$term$>$:] Causes the equation expressed in the current
proof environment to be recorded as a theorem with name and
``format'' determined by the term argument (the argument will be
either the theorem name alone or the theorem name with a list of
parameters). The theorem to be proved cannot already be a theorem,
though it can be a pretheorem.
\item[reprove $<$term$>$:] Allows one to change an existing theorem;
the only restriction enforced is that the dependencies of the
theorem cannot get stronger. Axioms and definitions cannot be
reproved.
\item[autoprove {\em or\/} autoreprove:] As ``prove'' or ``reprove'',
except that the format of the theorem proved is the name of the
current workspace. Especially useful in a workspace created by
``getleftside'', ``getrightside'' or ``autoedit''.
\item[impredicative:] This command allows definitions of more general
form and allows the built-in RAISE theorem to work on non-flat
infixes. The default state of the prover is now impredicative.
\item[defineconstant $<$term$>$ $<$term$>$:] Introduce a theorem
defining a constant, with the same name as that constant. The
constant cannot previously have been declared. The first term is a
constant name or a constant applied to a list or lists of
parameters, which is the left side of the intended theorem; the
second term is the right side of the intended theorem. The
structure of the definition is checked. Definitions are recorded in
dependency lists like axioms.
\item[definetypedinfix $<$string$>$ $<$integer$>$ $<$integer$>$
$<$term$>$ $<$term$>$:] As defineconstant, except that an infix with
specified left and right types is defined. The first string
argument is the name of the theorem to be added (since a solitary
infix is not a theorem name). The fourth and fifth arguments
correspond to the arguments of defineconstant.
\item[defineinfix $<$string$>$ $<$term$>$ $<$term$>$:] Defines
``flat'' infixes (with both relative types 0). The first argument
is the theorem to be added; the second and third correspond to the
arguments of defineconstant.
\item[defineitconstant, defineittypedinfix, defineitinfix:] Have
arguments as above, except for the last, which is replaced by the
current subterm.
\item[defineopaque $<$string$>$ $<$term$>$ $<$term$>$:] Defines opaque
operators. The arguments are as for ``definetypedinfix'', with the
omission of relative types. Stratification restrictions on the
definition are removed, but the use of the defined function in
definitions of anything non-opaque or in variable binding contexts
is strongly restricted.
\item[makeanaxiom $<$string$>$:] Makes the theorem indicated by the
argument an axiom.
\item[proveanaxiom $<$string$>$:] If the current proof environment contains
a proof of an axiom named by the argument, make that axiom a theorem with
the appropriate dependencies, making the appropriate adjustments of
dependencies through the whole theory.
\item[undefine $<$string$>$:] Convert the definition of the constant
or operator named by the parameter to an axiom.
\item[defineprimitive $<$string$>$:] If the theorem named by the
argument is of the appropriate form to serve as the definition of an
undefined notion, make it so.
\item[redefineconstant $<$string$>$:] If the current proof
environment could serve as a definition of a constant now defined,
make it so, with global dependency modifications as appropriate.
The argument is the new name for the old definition theorem.
\item[redefineinfix $<$string$>$:] If the current proof environment
could serve as a definition of an operator now defined, make it so,
with global dependency modifications as appropriate. The argument
is the name of the new defining theorem.
\item[Note:] The preceding commands support reaxiomatization of
theories. This facility can also be used to support the use of as
yet unproved lemmas.
\item[thmdisplay $<$string$>$:] Displays a theorem. Displays
comments. Displays names of module components (hidden theorems) but
not their contents.
\item[showalltheorems:] Displays all theorems (in a loop from which
any key other than Enter breaks out).
\item[moddisplay $<$string$>$:] Displays a module (a theorem and all
theorems hidden in its module).
\item[showallmodules:] Displays all modules (in a loop from which
any key other than Enter breaks out).
\item[showrelevantthms:] Displays all theorems which match (or whose
converses match) the current subterm nontrivially (i.e., it does not display
theorems which match all terms).
\item[showmatchthm $<$term$>$:] Displays a theorem (if there are any) which
justifies the equation given as the argument.
\item[showdec $<$string$>$:] Displays a constant or operator
declaration. It shows relative types and default left arguments in the
case of operators usable as prefixes. Displays comments.
\item[showalldecs:] Shows all declarations (in a loop as with
showalltheorems).
\end{description}
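Putting several theory commands together, setting up a small theory
might look like this (a hypothetical sketch; all names, the
parenthetical annotations, and the exact argument syntax are
illustrative assumptions):
\begin{verbatim}
declareconstant "zero"
declaretypedinfix 0 0 "+"             (a flat infix; declareinfix "+"
                                       would have the same effect)
axiom "PLUSZERO" (?x + zero) ?x       (name, left side, right side)
defineconstant (double @ ?x) (?x + ?x) (defining theorem for double)
thmdisplay "PLUSZERO"                 (display the new axiom)
\end{verbatim}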
\subsection{Term Syntax and Display}
We describe the syntax of the input language. Tokens are either {\em
identifiers\/} (made of alphanumeric characters and the characters ``\_" and
``?") or {\em operators\/} (made up of special characters, excluding
parentheses, quotes, braces, and brackets).
Identifiers have two special subsets: numerals (all characters
numeric) and variables (starting with ``?"). Other identifiers are
called ``constants". Operators starting with a caret are operator
variables and an initial colon represents an operation on
operators which may be iterated.
Constants and operators other than variables must be declared to be used.
Predeclared constants include ``RAISE" (a built-in theorem), ``true" and
``false". Built-in operators include ``=", ``," (pairing), ``@" (function
application), ``$=>$", ``$=>>$", ``$<=$", and ``$<<=$" (theorem embedding
infixes), ``+!", ``*!", ``-!", ``/!", ``\%!", ``=!", and ``$<$!" (operators
of the built-in unsigned integer arithmetic).
Single identifiers of whatever class are terms. Any term enclosed in
parentheses is a term. A term preceded by an operator (here used as a
unary prefix) is a term. An operator flanked by terms is a term (here
the operator is used as an infix). A term enclosed in brackets is a
term (a constant function whose value is the enclosed term).
In the current version, each operator used as a prefix has associated
with it a default left argument (selected by the user). In other
words, all unary prefix terms are abbreviations for binary infix
terms. If an operator for which a default left argument has not been
declared is used as a prefix, a bogus value is used which signals an
error condition to the prover. It should be noted that the ``left"
movement command {\em can\/} for this reason be used with prefix
terms. The declareunary command above should be used for prefixes
which are not also to be used as infixes; the declareprefix command
below can be used to set desired behavior where overloading is
intended.
The default operator precedence assigns to all infixes and prefixes the same
precedence and associates to the right as far as possible. The user can
assign integer precedences to prefixes/infixes (and this information can be
stored in theories); higher precedences bind more tightly; even precedence
associates to the right and odd to the left.
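To make the default parsing concrete (with hypothetical declared
infixes {\tt +} and {\tt *}, both at the default precedence):
\begin{verbatim}
?a + ?b + ?c    parses as    ?a + (?b + ?c)
?a + ?b * ?c    parses as    ?a + (?b * ?c)
\end{verbatim}
Giving {\tt +} an odd precedence with the {\tt setprecedence} command
below would instead make the first term parse as {\tt (?a + ?b) + ?c}.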
Two lists are used by the parser and are actually components of the theory
type: PREFIXINFO contains operators with default left arguments and
PRECEDENCEINFO contains precedence information. Saved theories are actually
stored in default precedence in the current version, and the precedence list
is loaded at the end of the process. Parsing is done in a first pass
independent of declaration checking; the parser will parse and display terms
which the prover will tell you contain undeclared constants or operators.
There is some pretty-printing built into the prover's display function; some
indenting is done in a way which is intended to suggest the structure of
terms. There are commands to set the right margin of the display and to set
the ``depth" of the display.
Commands which relate to term display include:
\begin{description}
\item[declareprefix $<$string$>$ $<$string$>$:] Declares the default left
argument of a prefix (the first argument) to be a constant indicated by the
second argument.
\item[setprecedence $<$string$>$ $<$integer$>$:] Sets precedence of an
operator (the first argument) to an integer value (the second argument). Even
precedence operators associate to the right and odd precedence operators to
the left.
\item[declaretypeinfo $<$string$>$ $<$string$>$:] Assigns an implicit
strongly Cantorian type to a variable (the first parameter, without
leading {\tt ?}). The effect is to cause the type infix {\tt :} to
function as a unary operator on that variable with invisible left
argument determined by the second parameter. This information is
currently not stored in theory files.
\item[detypeinfo $<$string$>$:] Removes implicit type from a variable.
\item[showprecs:] Shows a table of precedences (including explicit
indication of left or right association).
\item[clearprecs:] Restores all precedences to the default.
\item[setline $<$integer$>$:] Set the right margin for term displays.
\item[setdepth $<$integer$>$:] Limit the depth to which terms are
displayed. Negative values cause all terms to be displayed; this is the
default.
\item[nodepth:] Removes any limit on depth of display (sets the relevant
variable to a negative value).
\end{description}
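A short display-configuration sequence might look like this (a
hypothetical sketch; the operator names are assumed to be already
declared, and the annotations are not part of the syntax):
\begin{verbatim}
setprecedence "*" 4    (even: associates to the right)
setprecedence "+" 3    (odd: associates left; * binds tighter)
showprecs              (inspect the resulting table)
setline 72             (right margin for term displays)
setdepth 5             (truncate display below depth 5)
nodepth                (remove the depth limit again)
\end{verbatim}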
\subsection{The Logic of the Definition Facility}
The commands for the definition facility are given in the theory subsection.
Here we discuss the theory of the definition facility briefly.
A term is said to be a ``parameter list" if it is built from variables using
the operation ``," of pairing. A term is said to be in ``definition format"
if it is either a constant or a term in definition format applied to a
parameter list (using the infix ``@" of function application). A term is
said to be in ``infix definition format" if it is the result of applying a
unary prefix to a parameter list, the result of applying a binary prefix to
two parameter lists, or the result of applying a term in infix definition
format to a parameter list using the ``@" connective. The infix cannot be a
variable or colon-initial infix.
A term in either definition format has a ``head", the sole constant or infix
other than ``@" or ``," which appears.
A ``definition" is a theorem whose left side is in definition format or
infix definition format and which serves as the definition of the head
(whether constant or infix) of the left side of the equation.
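Concretely (with a hypothetical declared constant {\tt f} and declared
infix {\tt \%}):
\begin{verbatim}
f                      definition format, head f
f @ ?x                 definition format, head f
(f @ ?x) @ (?y,?z)     definition format, head f
?x % ?y                infix definition format, head %
(?x % ?y) @ ?z         infix definition format, head %
\end{verbatim}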
A definition needs to satisfy a number of technical requirements. The head
(the notion being defined) cannot have been declared at the time the
definition is made. This avoids circular definition neatly. The right side
must have no variables other than those which occur in the left side and
must contain only declared constants and operators.
To understand the further requirements, it is necessary to explain the
relative type system of the prover. The types are indexed by integers
(including negative integers). Type $n+1$ can be thought of as the
type of functions from type $n$ to type $n$. Terms do not have fixed
types, but subterms of a term are assigned types relative to the whole
term. The relative types of left and right arguments of an infix
subterm are determined by adding the left and right types associated
with that infix by its declaration to the type of the whole term; the
relative type of a prefix term is handled in the same way using the
underlying implementation of prefix terms as infix terms; the relative
type of the immediate subterm of a constant function subterm (of its
value) is one less than that of the parent term.
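As a worked example of the type computation, suppose a hypothetical
infix $\circ$ is declared with left type $1$ and right type $0$. If a
subterm $a\circ b$ has relative type $n$, then
\[ \mbox{type}(a) = n+1, \qquad \mbox{type}(b) = n+0 = n, \]
and if a constant function term $[t]$ has relative type $n$, the
enclosed value $t$ has relative type $n-1$.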
The relevant additional constraint on the form of definitions is that
each variable in the theorem (considering it as a single equation
term) occurs with the same relative type everywhere in the term. In
addition, no variable should occur in a term with an ``opaque"
operator at the top level, and no variable should occur in the left
argument of a function application infix in a theory started with the
{\tt clearpredicative} command unless the ``impredicative" command has
been issued in that theory (this ``predicative'' state used to be the
default state of the prover -- old theory files may need the
impredicative command).
Definitions do not have any axiom dependencies. They are expected to
acquire definition dependencies soon, so that definitions can be
reconfigured in the same way that axioms can now be reconfigured.
The logic of the definition facility is the ``higher-order" logic of the
prover; in conjunction with the logic prelude loaded as part of every
theory, it gives an implicit logical strength equivalent to Russell's Theory
of Types.
\subsection{List of Error Messages}
This may not be entirely up to date.
\begin{description}
\item[ {\tt Division by zero!}] Message from the built-in arithmetic with the obvious meaning.
\item[ {\tt At beginning}, {\tt At end}] Messages from the {\tt scan} command
sent when you try to read before the first entry or after the last
entry in a tree.
\item[ {\tt to view,l to go left, r to go right, q to quit}] Sent by
the {\tt scan} command when an inappropriate letter is entered.
\item[ {\tt No negative precedence}] The {\tt setprecedence} command does not
permit negative precedence to be set.
\item[ {\tt Errors found in displayed term}] This message is issued by the
term display command when it encounters the error term {\tt Constant ""}.
\item[ {\tt Warning: adding leading parenthesis}] This warning is issued by
the parser when it is forced to add a leading parenthesis to make
sense of a term.
\item[ {\tt Subterm error}] This message is issued by the movement commands
when one attempts to move to a subterm of an atomic term.
\item[ {\tt Ill-formed hypothesis}, {\tt Ill-formed case expression}] These
messages are issued by the declaration checking function when it
encounters inappropriate uses of {\tt |-|} or case expressions without
a pair of alternatives as the second argument, respectively.
\item[ {\tt Illegal opacity declaration}] Issued by the commands for declaring
opaque operators when they do not succeed. Possible reasons include
ill-formed operator names and conflicts with previous declarations.
\item[ {\tt Undeclared constants or embedded theorems present}] Issued by a
declaration checking function of the prover when it finds a term
unacceptable, usually because it contains undeclared constants or
operators or was so ill-formed as to be impossible to parse.
\item[ {\tt Meaningless bound variable or unstratified abstraction error}]
Issued by a declaration checking function when it finds errors in
stratification or in indexing of deBruijn levels (variable binding) in
a term.
\item[ {\tt Illegal or repeated constant declaration of}, {\tt Illegal or
repeated infix declaration of}] (it will supply the declared object).
Issued by the declaration commands when a declaration is illegal or
such an object has already been declared. These messages will normally
be issued many times during theorem export.
\item[ {\tt Name already in use}] Issued by {\tt declareopaque} when an
attempt is made to redeclare a preexisting constant.
\item[ {\tt Not an infix}] Issued by {\tt declareopaque} when a non-infix
parameter is provided.
\item[ {\tt Constant is a numeral}] Issued by {\tt showdec} when one attempts
to show declaration of a numeral (numerals are ``predeclared'').
\item[ {\tt Opaque infix variable}] Issued by {\tt showdec} when one attempts
to show declaration of an operator variable declared as opaque.
\item[ {\tt Constant not found}] Issued by {\tt showdec} when no constant with
the given name is found.
\item[ {\tt Theorem not found}] Issued by {\tt thmdisplay} when there is no
such theorem. Also used by {\tt reprove}.
\item[ {\tt Built-in operation}] Issued by {\tt thmdisplay} when one attempts
to display a ``theorem'' which is actually a hard-wired tactic.
\item[ {\tt No definition found}] Issued by {\tt seedef} when no such defined
notion is found.
\item[ {\tt Program not found}] Issued by {\tt seeprogram} when no program is
bound to the given operator or function (or when the operator or function
itself is not understood).
\item[ {\tt Not an equation!}] Issued by commands which match theorems when
the object presented to match with theorems is not an equation.
\item[ {\tt No such theorem}] Issued by {\tt showmatchthm} when no matching
theorem is found. Also issued by {\tt applythm} and {\tt
applyconvthm} when they do not recognize their arguments.
\item[ {\tt Illegal theorem declaration}] Issued by {\tt axiom} when parsing
or declaration problems prevent an axiom from being accepted.
\item[ {\tt Stratification, opacity or impredicativity error in definition}]
Issued by the {\tt defineconstant} command when stratification or
related considerations prevent a definition from being accepted.
\item[ {\tt Declaration or syntax error in definition}] Issued by the {\tt
defineconstant} command when undeclared objects appear or there
are parsing problems; this includes the case of circular definitions.
\item[ {\tt Stratification or opacity error in infix definition}] Issued by
infix definition commands when stratification-related considerations
cause a definition to be rejected.
\item[ {\tt Warning: automatically declaring default left argument of defined
prefix}] Issued by infix definition commands when an operator is
implicitly defined as unary.
\item[ {\tt Declaration or syntax error in infix declaration}] Issued by infix
definition commands when undeclared objects or syntax errors appear in
a definition. This includes attempts at circular definition.
\item[ {\tt Argument is not defined appropriately}] Issued by the {\tt
undefine} command when the object supplied to be ``undefined'' was not
defined in the first place.
\item[ {\tt Inappropriate argument}] Issued by the {\tt defineprimitive}
command when it is given an unsuitable theorem or a non-theorem.
\item[ {\tt Definition format error}] Issued by {\tt defineprimitive} when it
is given a theorem which does not have the form of a definition.
\item[ {\tt Not a theorem}] Issued by {\tt makeanaxiom} when the argument is
not a theorem.
\item[ {\tt Not applicable to definitions}, {\tt Not applicable to built-in
theorems}] Issued by {\tt makeanaxiom} when the argument is a definition
or a built-in theorem.
\item[ {\tt Illegal pretheorem declaration}] Issued by {\tt declarepretheorem}
when syntax or declaration errors cause it to reject a declaration.
\item[ {\tt Theorem is not of right form}] Issued by {\tt proveprogram} when
the theorem is not of the correct form to be bound to the given function
or operator.
\item[ {\tt Impossible error in program declaration}] This error message
should never be issued by {\tt proveprogram}, barring bugs.
\item[ {\tt Undeclared object error or program already present}] Issued by
{\tt proveprogram} when the object is undeclared or already has a program
bound to it.
\item[ {\tt Deprogram what???!}] Issued by {\tt deprogram} when it does not
recognize its argument.
\item[ {\tt No program to delete or change}] Issued by {\tt deprogram} when
there is no program to remove.
\item[ {\tt At top already}] Warning issued by the {\tt up} command when one
attempts to go up from the top of a parse tree.
\item[ {\tt No such hypothesis}] Issued by {\tt lookhyp} when its argument is
inappropriate.
\item[ {\tt Bad environment name or corrupt environment}] Issued by {\tt
saveenv} or {\tt backupenv} when an attempt is made to save or back up
a corrupt environment or to save an environment with an ill-formed
name. Warning: the message will appear when a perfectly good new term
is started if the previous environment was corrupt, due to the
automatic attempt to back up the bad environment.
\item[ {\tt Reconstructing environment from theory}] Remark made by {\tt
getenv} when it is loading an environment from a saved theory into the
local data structure which stores environments.
\item[ {\tt Environment not found}] Issued by {\tt getenv} when it finds no
environment named by the argument.
\item[ {\tt Cannot drop backup}] Issued by {\tt dropenv} when it is told to
drop the current backup environment.
\item[ {\tt No match found}] Issued by {\tt upto} and related commands when no
matching term is found to which to move.
\item[ {\tt Invalid argument}] Issued by {\tt upto} and related commands when
the argument doesn't pass a declaration check (or doesn't parse).
\item[ {\tt What theorem??!!}] Issued by commands like {\tt uptols} when
argument is not recognized as a theorem.
\item[ {\tt Built in theorem not supported by apply(conv)thm}] Issued by {\tt
applythm} and {\tt applyconvthm} when they encounter built-in theorems
which they do not support.
\item[ {\tt No such environment}] Issued by {\tt applyenv}, {\tt applyconvenv}
when argument is not recognized.
\item[ {\tt Corrupt environment}] Issued by {\tt applyenv}, {\tt applyconvenv}
when the referenced environment is corrupt (this actually should not
happen).
\item[ {\tt Undeclared constants in theorem to be proved}] Issued by {\tt
prove} when theorem does not pass a declaration check. This may
signal stratification errors as well as declaration errors.
\item[ {\tt Conflict with existing theorem name (or illegal name or format)}]
Issued by {\tt prove} when it objects to the proposed name or format
of a theorem.
\item[ {\tt Environment name not appropriate}] Issued when an attempt is made
to use {\tt autoprove} when there is not a user-defined environment
name.
\item[ {\tt Cannot modify an axiom}, {\tt Cannot modify a definition}] Issued
by {\tt reprove} when an attempt is made to modify an axiom or
definition.
\item[ {\tt Introduces additional dependencies}] Issued by {\tt reprove} when
it rejects a change in a theorem because it strengthens dependencies.
\item[ {\tt This command can only be issued at top of term}] Issued by {\tt
interaxiom} when an attempt is made to interpret a proper subterm as
an axiom.
\item[ {\tt Current subterm is not of correct form}] Issued by {\tt
interaxiom} when the current subterm is not an equation.
\item[ {\tt Target is not an axiom, is a definition, or does not match
environment}] The all-purpose error message of {\tt proveanaxiom}.
\item[ {\tt Incorrect new name for old definition}] Issued by {\tt
redefineconstant} when declaration collisions or syntax make the
proposed new name for the old definition inappropriate.
\item[ {\tt Source is not a suitable definition}] Issued by {\tt
redefineconstant} when it does not like the proposed new definition.
\item[ {\tt Declaration or syntax error in redefinition}] Issued by {\tt
redefineconstant} probably only when the proposed new definition would
be circular.
\item[ {\tt Inappropriate arguments}] Issued by {\tt redefineinfix} for a
variety of offenses against declarations.
\item[ {\tt Stratification or opacity error in infix redefinition}] Issued by
{\tt redefineinfix} when stratification or related considerations
cause the new definition to be rejected.
\item[ {\tt Declaration or syntax error in infix redeclaration}] Issued by
{\tt redefineinfix} probably only when definition would be circular.
\item[ {\tt Illegal substitution attempted}] Issued by {\tt assign} and
relatives when bad substitutions are attempted.
\item[ {\tt No rule to drop!}] Issued by {\tt droprule} when it is invoked
inappropriately.
\item[ {\tt No rule to toggle}] Issued by {\tt altrule} when it is invoked
inappropriately.
\item[ {\tt Theorem or pretheorem not found}] Issued by {\tt ruleintro} when
it does not recognize its argument as a theorem.
\item[ {\tt Ill-formed theorem name}] Issued by {\tt ruleintro} when it cannot
parse its argument to find a theorem name.
\item[ {\tt Declaration error}] Issued by {\tt ruleintro} when its argument
contains undeclared constants or cannot be parsed at all.
\item[ {\tt No matching theorem found}] Issued by {\tt matchruleintro} or {\tt
targetruleintro} when it cannot find a matching theorem to introduce.
\item[ {\tt Bad arithmetic evaluation}] Internal error -- I doubt that the
prover can actually issue it to a user.
\item[ {\tt No such theory to get}] Issued by {\tt gettheory} when it does not
understand its argument.
\item[ {\tt Won't drop backup of current theory}] Issued by {\tt droptheory}
when it declines to drop the current backup.
\item[ {\tt No such theory found to drop}] Issued by {\tt droptheory} when it
does not understand its argument.
\item[ {\tt View of that name already exists}] Issued by {\tt declareview} in
event of name collision.
\item[ {\tt No such constant to project}] Issued by {\tt viewasin} when it
does not understand its local declaration argument.
\item[ {\tt Cannot export infix without its definition}] Issued by {\tt
viewasin} when an attempt is made to export a defined operator without
its definition.
\item[ {\tt Already found in view}] Issued by {\tt viewasin} when there is
already a binding in the view.
\item[ {\tt Such an item is not found in such a view}] The all-purpose
error message of {\tt dropfromview}.
\item[ {\tt No such view to drop}] Issued when {\tt dropaview} does not
understand its argument.
\item[ {\tt No such view}] Used by {\tt viewasin}, {\tt viewofin}, {\tt
showview}, or a theorem export command when it does not recognize its
view argument.
\item[ {\tt No such item in view}] Used by {\tt viewofin} when it does not
recognize its binding argument.
\item[ {\tt Bad deps in export list}] An error in theorem export. The
theorem(s) to be exported depend on axioms not found in the view to be
used.
\item[ {\tt Constant in view is not declared}] An error in theorem export. A
constant in the view is not declared.
\item[ {\tt Constant definition matching error}, {\tt Defined infix matching
error}] An error in theorem export. A definition does not match.
\item[ {\tt Missing declarations in target}] An error in theorem export.
Something the view leads one to expect is not present in the target
theory.
\item[ {\tt Infix typing error}] An error in theorem export. The types of an
operator do not match correctly.
\item[ {\tt No such remote theory}] Theorem export command does not recognize
the target theory.
\item[ {\tt Construction of export list failed}] An error in theorem export.
Export list could not be constructed.
\item[ {\tt Predicativity error in export}] An error in theorem export.
Source theory is impredicative and target theory is predicative.
\item[ {\tt Opacity error in export}] An error in theorem export. Something
in target theory is opaque and its counterpart in source is not.
\item[ {\tt Theory match failure}] An error in theorem export. Source and
target theories do not match as claimed by the view.
\item[ {\tt Export aborted; theory restored}] Remark issued on backtracking
from a failed theorem export.
\item[ {\tt Illegal character command}] Message from the {\tt walk} interface
when it does not recognize a command.
\item[ {\tt Unknown or inappropriate theorem(s)} ] Message from the
{\tt pushtheorem} or {\tt poptheorem} commands when given
inappropriate arguments; the {\tt forget} command sends a similar
message (in the singular).
\end{description}
\section{Tutorial}
The MARK2 theorem prover is an upgrade of the theorem prover EFTTP, an
equational prover implementing first-order logic, to allow the
implementation of higher-order logic.
The prover is equational: all theorems are equations, and the rules
of inference supported directly are exactly the basic rules of
equational reasoning that we all learn (implicitly at least) in
algebra.
The prover incorporates a programming language. This was originally
developed to allow the easy implementation of ``tactics'', procedures
for the automatic implementation of many steps of reasoning without
user intervention. Tactics or programs look to the system like
equational theorems, and are stored in theories along with theorems of
the normal sort; we think that the way in which this is achieved has
some interest. The fact that the prover has its own tactic-writing
facility makes it much less dependent on the languages in which it is
implemented (two dialects of ML) than was originally expected. We are
currently working on a C++ implementation, which will allow more
precise control over details of memory allocation and other aspects of
execution.
The prover is interactive. The user enters a term and manipulates
it, possibly with the help of automatic tactics, until he or she
obtains the desired final form of the term. The user can then prove a
theorem equating the initial and final forms of the term, which
acquires the same status as the axioms and previously proved theorems
of the system. Tactics are developed in the same way. Facilities are
also provided for ``debugging'' tactics (programs).
The last couple of paragraphs of this introduction are technical and
can safely be skipped.
The semantics of the prover are based on a system of relative typing
(``stratification'') analogous to that which underlies Quine's set
theory ``New Foundations''. It is closely related to a streamlined
version of the type system of the typed $\lambda$-calculus widely used
in computer science, and can often be regarded as implementing this
streamlined system of types with polymorphism. The type system is
completely invisible to an unsophisticated user!
The types in this ``streamlined'' type system are labelled by
non-negative integers $n$, with each type $n+1$ identified with $n \rightarrow n$
(functions from type $n$ to type $n$) and each type $n$ identified
with type $n \times n$ (pairs of objects of type $n$).
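For instance, under these identifications the first few types unfold
as follows (a sketch of the bookkeeping only; none of this is visible
to the user):
$$1 = 0 \rightarrow 0, \qquad
2 = 1 \rightarrow 1 = (0 \rightarrow 0) \rightarrow (0 \rightarrow 0),$$
$$1 = 1 \times 1 = (0 \rightarrow 0) \times (0 \rightarrow 0).$$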
The consistency difficulties of ``New Foundations'' do not apply here;
this system is related to the version {\em NFU\/} of Quine's system
which is known to be consistent. For a technical discussion of the
theory of stratified $\lambda$-calculus, see our preprint ``Untyped
$\lambda$-calculus with relative typing''.
\subsection{Building the Prover if You are Somewhere Else}
Run the source code under SML/NJ (Standard ML of New Jersey) then
issue the command
\begin{verbatim}
exportML "sml_prover";
\end{verbatim}
to create an executable file (actually a version of SML/NJ with the
prover commands pre-loaded).
\subsection{The Platform}
MARK2 is implemented in SML/NJ (Standard ML of New Jersey) and Caml
Light (another dialect of ML, which runs on PCs!).
To start the SML version from UNIX on onyx, first type
\begin{verbatim}
sml_prover
\end{verbatim}
You will get some verbiage and an SML prompt.
The fact that we are in ML has two effects. One is that you will
always issue commands at the ML prompt (a hyphen). Another is that ML
will send you some messages (it reports the type of its output to you
after each command). I don't know a way to suppress this.
The direct effect it has on you has to do with the syntax of command
lines. Each prover function is actually an ML function. Those which
have arguments take string arguments, which must be enclosed in double
quotes (there are a couple of commands which take integer parameters;
this will be pointed out). ML insists that those which do not have
arguments take a dummy argument, written () and known affectionately
as ``unit''. ML command lines must end with semicolons. In the Caml
Light version, all command lines must end with {\em two\/} semicolons.
Sample command lines:
\begin{verbatim}
showalltheorems(); (* a command with no parameters *)
axiom "COMM" "?x+?y" "?y+?x"; (* a command with three string
arguments *)
\end{verbatim}
If you goof up in ML, ML will send you error messages. There is no
easy way to damage your theorem prover session by outraging ML; just
ignore commands which lead to ML errors and try again.
To abort a line and start over, type Control-C. To leave ML
altogether, type Control-D. If Control-C is typed to break out of one
of the command interpreters (execute, steps, or onestep) be sure to
use the top command to reset environment variables.
To save your work in MARK2 in a new ``theory file'', type
\begin{verbatim}
storeall "";
\end{verbatim}
Theory files have the extension ``.thy3'', added automatically by the
``storeall'' command. To open an existing theory file, issue the
command
\begin{verbatim}
load ""; (* don't include the .thy3 extension! *)
\end{verbatim}
A lot of stuff will happen; you may have to wait a little while if the
theory file is large.
If you have introduced a theory name by using either of the above
commands, the command
\begin{verbatim}
safesave();
\end{verbatim}
will automatically save your work to the correct file.
\subsection{The Input Language}
Terms to be manipulated by the prover as representing objects in
mathematical theories are entered by the user and displayed by the
prover in the MARK2 ``input language''.
The syntax of the input language is very simple. There are two types
of tokens. {\em Identifiers\/} are strings of letters, numbers,
question marks and underscores. Identifiers which begin with question
marks are used as variables; all others are used as constants (when so
declared by the user). {\em Operators\/} (used as either infixes or
prefixes) are strings of special characters (not including brackets,
braces, or parentheses). Those which begin with carets (shift-6) are
operator variables; operators which begin with colons are the product
of an operator construction to be described below. Operators which a user
declares will not begin with colons or carets.
A term is either of the form $<$identifier$>$, the form $[<$term$>]$
(such terms are called ``constant function terms'', for reasons which
will become evident), the form ($<$operator$>$ $<$term$>$) (such terms
are called ``prefix terms'') or the form ($<$term$>$ $<$operator$>$
$<$term$>$) (such terms are called ``infix terms'').
It is not necessary to enter all parentheses in the above term forms.
The rule followed is the old APL rule; all operators associate to the
{\em right\/} as far as possible. Spaces are not significant, except
where they are required to separate adjacent operators (this occurs only when
an infix is followed immediately by a prefix). The prover
automatically displays terms with the least possible use of
parentheses. Variables do not need to be declared.
The best way to get to know the input language is to enter into
dialogue with MARK2. It is first necessary to be aware that all
constant identifiers and operators used must be declared. It is also
necessary to use declarations to tell the prover that an operator
can be used as a prefix.
Try entering
\begin{verbatim}
start "?x+?y";
\end{verbatim}
You should get an error message telling you that an undeclared
constant has been used. You should also see two slightly reformatted
copies of the term you have entered. The ``start'' command always
displays two copies of the term entered, for reasons which will become
evident below.
Now type
\begin{verbatim}
declareinfix "+";
\end{verbatim}
and repeat the ``start'' command from above. MARK2 should no longer
complain.
Now try the following three commands:
\begin{verbatim}
start "(?x+?y)+?z";
start "?x+(?y+?z)";
start "?x+?y+?z";
\end{verbatim}
Feel free to try more complex terms; explore what is meant by
``associating to the right''. Try including a constant like ``a''
(not beginning with a question mark) and see what MARK2 says. Try
putting in numerals like ``0'' or ``12''; you will find that MARK2
will not give you trouble with declarations in this case.
Now we look into prefixes. Enter the command
\begin{verbatim}
declareunary "-";
\end{verbatim}
You will discover that this makes possible the use of ``-'' as both
an infix and a prefix operator. Explore this using the ``start'' command.
Especially notice the difference between
\begin{verbatim}
start "?x+-?y"; (* There is no infix +- *)
start "?x+ -?y"; (* The space is necessary *)
\end{verbatim}
The form of the command for declaring constants like ``a'' is
\begin{verbatim}
declareconstant "a";
\end{verbatim}
Declare some literal constants and mix them into expressions
introduced with the ``start'' command. Try declaring some new infixes and
prefixes.
Try outraging the declaration system by declaring constants or operators
which have already been declared, or by declaring constant identifiers
with special characters, ``constant'' identifiers beginning with
question marks, operators with letters in them, and so forth. MARK2 has
a large library of error messages :-)
Just for the sake of completeness, we explain the mechanics behind
prefixes. A prefix expression is always understood by the system as
an abbreviation for an infix expression. Each operator which can be
used as a prefix has associated with it a ``default value'' in such a
way that the infix term ($<$default-value1$>$ $<$infix1$>$
$<$term1$>$) (where the appropriate default value depends on the
infix) is the intended meaning of the prefix term ($<$infix1$>$
$<$term1$>$). If the former term is entered, the latter term will be
displayed. The default value initially associated with an operator is an
error value which will cause MARK2 to complain if the operator is used as
a prefix; the default value associated with operators declared using
``declareunary'' is {\tt defaultprefix}
(all operators without exception can legally be used as infixes; there
is no way to declare an operator which can only be used as a
prefix).
The command which associates a given default value with an operator is
\begin{verbatim}
declareprefix "" "";
\end{verbatim}
To test out this system, try the following commands:
\begin{verbatim}
declareconstant "true";
declareinfix "~";
start "true~?x";
start "~?x";
declareprefix "~" "true";
start "true~?x";
start "~?x";
\end{verbatim}
Try entering terms like ``$0-?x$'' or complex terms with embedded
subterms of that sort.
Save your theory using the command
\begin{verbatim}
storeall "tutorial";
\end{verbatim}
To start again, use
\begin{verbatim}
load "tutorial"; (* in the same directory you were in when you stored
it! *)
\end{verbatim}
and it can be saved then using
\begin{verbatim}
safesave();
\end{verbatim}
\subsection{Navigation within Terms}
Enter the term $?a+?b$ using the ``start'' command. Now type the
command
\begin{verbatim}
right();
\end{verbatim}
The lower occurrence of $?a+?b$ should be replaced with its right
subterm $?b$. Now type
\begin{verbatim}
up(); (* you should return to the whole term *)
left(); (* you should see the left subterm *)
left(); (* you should get an error message *)
top(); (* takes you back to the top level *)
\end{verbatim}
Enter a more complex term like $(?a+?b)+?c+?d$ and use the movement
commands ``right'', ``left'', ``up'', and ``top'' until you have a
feeling for how they work. To recover from the {\em Subterm error}
condition you will have encountered above, either (possibly repeated)
use of ``up'' or a single use of ``top'' will work.
The official description of these phenomena follows. The display
which you have seen following each application of the ``start''
command shows two terms: the top term is the complete term currently
being considered, while the lower term is the ``current subterm'' of
that term.
The ``left'' or ``right'' command causes the ``current subterm'' to
become the left or right subterm of the erstwhile current subterm.
The left or right subterm of an infix term is defined in the obvious
way. The left and right subterms of a constant function term $[x]$
are both the subterm $x$. Atomic identifiers (constant or variable)
have no left or right subterms, and so an error condition is reported.
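For example (a sketch, assuming ``+'' has been declared as above):
\begin{verbatim}
start "[?a+?b]";
left();   (* the current subterm becomes ?a+?b *)
up();     (* back to the whole term [?a+?b] *)
right();  (* again ?a+?b: both subterms of a
             constant function term are the same *)
\end{verbatim}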
Prefix terms do have left subterms (the ``default values'' associated
with their prefixes): try entering
\begin{verbatim}
start "-?x";
left();
\end{verbatim}
The ``up'' command causes the current subterm to become the smallest
subterm of the complete term which properly contains the erstwhile
current subterm; if one is at the top level already, an error message
is sent. The ``top'' command causes the current subterm to become the
complete term under consideration.
The underlying metaphor views terms as (upside-down) trees, with the
complete term as the root (at the top!) and each subterm having its
immediate subterms growing as branches below it.
The more complex movement commands ``upto'', ``downtoleft'', and
``downtoright'' enable one to move to subterms remote from one's
current ``position''; they are described with the individual commands
in the Appendix.
If one is in doubt about how to parse a term of the input language,
the movement commands provide a way of exploring the structure of a
term. For example, the left subterm of $?a+?b+?c+?d$ is $?a$ and the
right subterm is $?b+?c+?d$, which might be a useful discovery if one
were uncertain about our associativity convention. Look at the
structure of the term $?a*?b+?c$ (after declaring an operator *); it is
not what you might intuitively expect, since there is no precedence of
operators in the language.
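One such exploration might look like this (a sketch, assuming ``*''
and ``+'' have been declared as above):
\begin{verbatim}
start "?a*?b+?c";
right();  (* the right subterm is ?b+?c, so the term
             parses as ?a*(?b+?c), not (?a*?b)+?c *)
\end{verbatim}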
A command which is often useful is the command
\begin{verbatim}
look();
\end{verbatim}
which allows one to see the complete term and the current subterm
again; some prover commands do not redisplay them.
\subsection{Starting to Prove Something}
So far we can write down terms, but neither we nor the prover knows
anything about what they mean. We begin to remedy this.
We describe the fundamental structure of a prover session. The user
enters a term using the ``start'' command. We call this the initial
term. The user performs a series of manipulations of this term
(sometimes of the initial term as well), guaranteed to preserve the
truth of the equation between the initial term and the current term.
The user finally issues a command, usually
\begin{verbatim}
prove "";
\end{verbatim}
which records the theorem $<$initial-term$>$ = $<$current-term$>$ under
the name $<$name-of-theorem$>$. The manipulations allowed in the
intermediate steps are most often applications of equational theorems,
and the new theorem just proved will be added to the current
``theory'' (library of theorems) and will be usable in subsequent
proofs.
Of course, not all theorems in the library can have been proven; some
are introduced initially as axioms or as definitions. Definitions
will be discussed below; we now show how to enter an axiom. Issue the
following command.
\begin{verbatim}
axiom "COMM" "?x+?y" "?y+?x";
\end{verbatim}
A display produced by the prover will follow. The prover now knows
that the operator ``+'' is commutative.
The format for an axiom declaration should be clear from the above
example; the ``axiom'' command has three parameters, the first being
the name of the axiom, the second being the left side of the equation
being asserted as an axiom, and the third being the right side of the
equation.
Our assertion that the prover ``knows'' that addition is commutative
requires some support. Try out the following sequence of commands.
\begin{verbatim}
start "?x+?y+?z";
ruleintro "COMM";
\end{verbatim}
The current term now looks like this:
\begin{verbatim}
COMM => ?x + ?y + ?z
\end{verbatim}
The infix $=>$ is a predeclared operator. The value of $?x=>?y$ is
the same as that of $?y$, with the annotation that one intends to
apply the theorem $?x$ to the term $?y$. Such terms are called
``embedded theorem applications''.
Now issue the command
\begin{verbatim}
execute();
\end{verbatim}
which carries out all embedded theorem applications in the current
subterm. You should now see the term
\begin{verbatim}
(?y + ?z) + ?x
\end{verbatim}
which is the result of applying the commutative law of addition to the
original term.
There is another opportunity to apply the commutative law here; this
can be realized by the sequence of commands
\begin{verbatim}
left();
ruleintro "COMM";
execute();
\end{verbatim}
which should leave you with the term
\begin{verbatim}
(?z + ?y) + ?x
\end{verbatim}
You should now be able to see why the movement commands are provided.
Next, use ``ruleintro'' to propose application of the commutative law
to both the complete term and the subterm at the same time, then use
the ``top'' command to return to the complete term, and only then
issue the ``execute'' command. Both applications of the commutative
law will be carried out at once, returning the term to its original
shape. The form of the term before you issue the ``execute'' should
be
\begin{verbatim}
COMM => (COMM => ?z + ?y) + ?x
\end{verbatim}
As the last exercise illustrates, the commutative law of addition is
completely symmetrical in its effect. We introduce another axiom, by
the command
\begin{verbatim}
axiom "ASSOC" "(?x+?y)+?z" "?x+?y+?z";
\end{verbatim}
with which we will explore further features of theorem application.
Issue the following sequence of commands:
\begin{verbatim}
start "?a+?b+?c";
ruleintro "ASSOC"; execute(); (* you can write more than one command
on a line! *)
\end{verbatim}
Did you expect something to happen? (It is useful to note that an
embedded theorem which does not apply to its target is simply removed
by the ``execute'' command).
The difficulty is that the theorem application represented by the
infix $=>$ always represents the result of attempting to match the
target term with the {\em left\/} side of the theorem; the command
which works in this situation is
\begin{verbatim}
revruleintro "ASSOC";
\end{verbatim}
upon which we see
\begin{verbatim}
ASSOC <= ?a + ?b + ?c
\end{verbatim}
An infix expression $?x<=?y$ has the value of $?y$ and signals the
intention of applying the theorem $?x$ in reverse; that is, attempting
to match the term $?y$ with the {\em right\/} side of the equation.
Now, the ``execute'' command gives the desired result
\begin{verbatim}
(?a + ?b) + ?c
\end{verbatim}
A useful exercise is to take the term $(?a+?b)+?c+?d$, or a similar
complex term, and try applying the associative law to it in each
direction. A useful command to know about is
\begin{verbatim}
startover();
\end{verbatim}
which restores the initial term.
We now prove a theorem, the theorem $?x+?y+?z = ?z+?y+?x$, which we
will call ``REVERSE''. Enter this sequence of commands:
\begin{verbatim}
start "?x+?y+?z";
revruleintro "ASSOC"; execute();
ruleintro "COMM"; execute();
right(); ruleintro "COMM"; execute();
prove "REVERSE"; (* the new theorem should now be displayed *)
startover();
ruleintro "REVERSE"; execute(); (* the new theorem can be applied
like the old ones *)
\end{verbatim}
Type
\begin{verbatim}
thmdisplay "REVERSE";
\end{verbatim}
to get another look at your theorem (this is a useful command to know
anyway). Note especially the list of strings at the bottom of the
display; this indicates the axioms on which ``REVERSE'' depends.
Another command of the same genre is
\begin{verbatim}
showalltheorems(); (* after typing this, hit enter repeatedly *)
\end{verbatim}
Each time you hit enter, a theorem will be displayed; it will go
through the entire list. Depending on the context you are working in,
you may see some extra theorems you don't know anything about. You
can use any key other than Enter to break out.
An exercise which one might attempt now is to introduce the axioms of
commutativity and associativity for multiplication (represented by *)
as well as the distributive law of multiplication over addition, and
prove the usual FOIL rule $(?x+?y)*(?z+?w) =
(?x*?z)+(?x*?w)+(?y*?z)+(?y*?w)$. You might decide that you want
to prove some additional theorems to make this easier (for example, if
you stated the distributive law as $?x*(?y+?z) = (?x*?y)+(?x*?z)$, you
might want to prove the other form, $(?x+?y)*?z = (?x*?z)+(?y*?z)$,
for use in the proof of the FOIL theorem).
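One possible set of axiom declarations for this exercise (the theorem
names here are merely our suggestions; recall that names must be built
from letters, numbers, question marks, and underscores):
\begin{verbatim}
declareinfix "*";   (* skip this if * is already declared *)
axiom "COMMTIMES" "?x*?y" "?y*?x";
axiom "ASSOCTIMES" "(?x*?y)*?z" "?x*?y*?z";
axiom "DISTRIB" "?x*(?y+?z)" "(?x*?y)+?x*?z";
\end{verbatim}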
A subtlety of theorem use can be illustrated if we introduce the axiom
``INV'': $?a+ -?a=0$. Issue the commands
\begin{verbatim}
start "0";
revruleintro "INV"; (* after declaring the axiom! *)
execute();
\end{verbatim}
If you declared the axiom exactly as described above, you will get
\begin{verbatim}
?a_1 + -?a_1
\end{verbatim}
The numerical suffixes (which may have a value other than 1) are
added to make collisions with variables that you are already using
unlikely. We can assign a specific value to the new variable {\tt
?a\_1} by using the ``assign'' command:
\begin{verbatim}
assign "?a_1" "3";
\end{verbatim}
whereupon we have
\begin{verbatim}
3 + -3
\end{verbatim}
In this most common use of the ``assign'' command, it is expected that
the variable to which the assignment is made is {\em new\/}. It is
important to note that if an assignment is made to a variable which
appears in the initial term (the term you introduced with ``start'' at
the beginning of your session), the variable will be replaced in
the initial term as well as in the current term; this is obviously
necessary if assignments to variables are to preserve an equation
between the initial term and the current term.
We use the command ``lookback'' which allows us to look at the initial
term without returning to it in the following example:
\begin{verbatim}
start "?x+?y";
ruleintro "COMM"; execute();
assign "?x" "?a+?b";
lookback(); (* the initial term will have changed! *)
look(); (* the current term is unchanged by the lookback command
*)
startover(); (* note that this does not undo the assignment *)
\end{verbatim}
With the ``ruleintro'', ``execute'', ``assign'', and ``prove''
commands, you have the basic tools needed to do laborious step by step
proofs by hand in MARK2. If you prove interesting theorems, be sure
to save your work (using the ``storeall'' command if you started from
scratch, or the ``safesave'' command if you are working in an existing
theory which you got with the ``load'' command, as described above).
\subsection{Definitions}
MARK2 provides a facility for defining new constants:
\begin{verbatim}
defineconstant "Four" "2+2";
\end{verbatim}
The effect of this command is to create a theorem with the same name
``Four'' as the constant being declared. The constant ``Four'' must
not have been declared previously! Notice that the list of axioms on
which this new theorem depends is empty; it is a ``freebie''. One
cannot define a constant which depends on variables.
It is also possible to define new operators. For example, we can
define a new infix ``**'' with the meaning ``sum of the squares of its
arguments'' as follows:
\begin{verbatim}
defineinfix "SUMOFSQUARES" "?x**?y" "(?x*?x)+?y*?y";
\end{verbatim}
Note that the user needs to supply a name for the theorem encoding the
definition.
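The new definition can then be used as a rewrite rule like any other
theorem; a sketch (assuming ``*'' and ``+'' have been declared):
\begin{verbatim}
start "?a**?b";
ruleintro "SUMOFSQUARES"; execute();
   (* the defined infix should unfold to (?a*?a)+?b*?b *)
\end{verbatim}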
It is also possible to define functions in the same general style as a
mathematical definition like $$f(x) = x^2 + x.$$
The equivalent MARK2 definition looks like this (try it):
\begin{verbatim}
defineconstant "SampleFunction@?x" "(?x*?x)+?x";
\end{verbatim}
The infix @ is a predeclared infix used to represent function
application: the mathematical term $f(x)$ would be written $?f@?x$.
Another predeclared infix is the comma, which is used to represent
the ordered pair construction, and can also be used in parameterized
definitions:
\begin{verbatim}
defineconstant "Add@?x,?y" "?x+?y";
\end{verbatim}
defines a function of two variables encoding the addition operation.
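A sketch of this definition in use (recall that numerals need no
declaration, and that the definition creates a theorem with the same
name as the defined constant):
\begin{verbatim}
start "Add@2,3";
ruleintro "Add"; execute();   (* should yield 2+3 *)
\end{verbatim}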
It is possible to define higher-order functions and to define operator
terms (prefix or infix) via their function values. Examples follow:
\begin{verbatim}
defineconstant "(Comp@?f,?g)@?x" "?f@?g@?x"; (* composition of
functions *)
defineinfix "ADDFNS" "(?f++?g)@?x" "(?f@?x)+?g@?x";
\end{verbatim}
There are restrictions on definitions having to do with the underlying
type system of the prover. An example of a definition which will not
succeed is
\begin{verbatim}
defineconstant "Diag@?x" "?x@?x";
\end{verbatim}
We will not go into the details at this point; generally, if one
defines functions which make sense mathematically, one will not run
afoul of the type system (but don't try to define general terms from
$\lambda$-calculus without reading about the details of the type
system below).
The definition system is secure against circular definition or the
presence of unacknowledged parameters.
The definition of any concept creates a theorem; one uses that theorem
with ``ruleintro'' to eliminate the defined concept and that theorem
with ``revruleintro'' to introduce the concept. Try out the following
session:
\begin{verbatim}
defineconstant "Square@?x" "?x*?x";
start "(?a+?b)*(?a+?b)";
revruleintro "Square"; execute();
ruleintro "Square"; execute();
\end{verbatim}
\subsection{Simple Programming and Debugging}
Suppose in this subsection that we have available the axioms ``COMM'':
$?x+?y=?y+?x$ and ``ZERO'': $0+?x=?x$.
An obvious theorem to prove (prove it) is ``COMMZERO'': $?x+0=?x$.
But we usually feel that ``ZERO'' and ``COMMZERO'' are not really
separate theorems; it would be nice to be able to express that adding
zero on either side of a term leaves it unaffected. We will prove a
theorem ``EITHERZERO'' which has this effect. The session proceeds as
follows:
\begin{verbatim}
start "?x+?y";
ruleintro "ZERO";
ruleintro "COMMZERO";
prove "EITHERZERO1";
\end{verbatim}
The theorem ``EITHERZERO1'' looks like this:
\begin{verbatim}
?x + ?y =
COMMZERO => ZERO => ?x + ?y
\end{verbatim}
Try applying it to terms $?x+0$ and $0+?x$. It seems to have the
desired effect.
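One way to try this out (each ``start'' backs up the previous environment on the desktop):
\begin{verbatim}
start "?x+0";
ruleintro "EITHERZERO1"; execute(); (* should give ?x *)
start "0+?x";
ruleintro "EITHERZERO1"; execute(); (* should give ?x *)
\end{verbatim}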
The reason that this works has to do with the way the ``execute''
command works. The command applies all embedded theorems, including
ones introduced in the course of execution, always applying the
innermost ones first. You can get a look at how it works by using the
command
\begin{verbatim}
steps();
\end{verbatim}
instead of ``execute'', then hitting return repeatedly. It will trace
the execution of the various embedded theorems for you. The ``steps''
command can be used as a debugging tool. Use ``q''
to break out (it will stop automatically if the term stabilizes).
Now try applying the theorem ``EITHERZERO1'' to $0+?x+0$. You should
get $?x$; the tactic got a little overenthusiastic. You can figure
out why on your own or look at the execution using ``steps''. This
behaviour can be corrected:
\begin{verbatim}
start "?x+?y";
ruleintro "ZERO";
ruleintro "COMMZERO";
altrule(); (* see what happens? *)
prove "EITHERZERO"; (* note difference in theorem names *)
\end{verbatim}
The new theorem looks like this:
\begin{verbatim}
?x + ?y =
COMMZERO =>> ZERO => ?x + ?y
\end{verbatim}
The $=>>$ infix represents an embedded theorem which will be applied
only if the theorem which was to be applied immediately before was
inapplicable to its target; so, if ``ZERO'' is applied, ``COMMZERO''
will not be applied; applying ``EITHERZERO'' to $0+?x+0$ will give
$?x+0$ instead of $?x$. An example of its use: to apply only the first
of the four theorems $x1$, $x2$, $x3$, and $x4$ which is applicable to
a term $y$, use $x4 =>> x3 =>> x2 =>> x1 => y$ (the innermost arrow
must be a $=>$ or nothing will happen at all!); this avoids the
possibility that $x1$ might apply to $y$ and then, say, $x3$ apply as
well to the result, which could happen if $=>$ were used throughout.
To apply a theorem in reverse only on failure of the previous theorem,
use $<<=$, which has the same relation to $<=$ that $=>>$ has to
$=>$. The ability to
construct a list of alternative theorems to be applied while being
certain that the outcome will not be that several of them will be
applied in sequence helps to make behaviour of complex theorems of
this kind more predictable.
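Such a chain of alternatives is built up in the same way as ``EITHERZERO'' above: introduce the rules innermost first, then convert each one after the first with ``altrule''. Schematically, with hypothetical theorem names ``X1'' through ``X4'':
\begin{verbatim}
start "?x+?y"; (* or whatever term is wanted *)
ruleintro "X1";
ruleintro "X2"; altrule();
ruleintro "X3"; altrule();
ruleintro "X4"; altrule();
(* the current term should now be X4 =>> X3 =>> X2 =>> X1 => ?x + ?y *)
\end{verbatim}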
With ``EITHERZERO1'' and ``EITHERZERO'', we are already doing
programming (recall that theorems which function as programs are
called ``tactics''). But we can do much more impressive things.
We now develop a tactic which applies ``ZERO'' aggressively, removing
every addition of zero that it can find!
\begin{verbatim}
declarepretheorem "ZEROES";
\end{verbatim}
The ``declarepretheorem'' command declares the identifier ``ZEROES''
and tells the prover that a theorem by this name will be forthcoming.
Normally, the ``prove'' command takes care of this automatically, but
this will not work in this case.
\begin{verbatim}
start "?x+?y";
right(); ruleintro "ZEROES"; up();
left(); ruleintro "ZEROES"; up();
\end{verbatim}
The prover takes our word for it that there will soon be a theorem
called ``ZEROES''.
\begin{verbatim}
ruleintro "EITHERZERO";
prove "ZEROES";
\end{verbatim}
Something is very fishy here. The theorem ``ZEROES'', which looks
like this:
\begin{verbatim}
?x + ?y =
EITHERZERO => (ZEROES => ?x) + ZEROES => ?y
\end{verbatim}
seems to be defined in terms of itself (which was why it was necessary
to declare it)! Try applying this theorem to a complex sum with lots
of parentheses and zeroes; it should hunt down and eliminate all the
zeroes. Watch it at work with ``steps'' (use ``q''
to break out of ``steps'').
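For instance (using variables so that no further declarations are needed):
\begin{verbatim}
start "((?a+0)+0)+(0+?b)";
ruleintro "ZEROES"; execute(); (* should give ?a+?b *)
\end{verbatim}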
The reason that the recursion terminates is that embedded theorems
simply disappear when applied to terms to which they are inapplicable;
notice if you use ``steps'' that the applications of ``ZEROES'' to
atomic terms do not produce further occurrences of ``ZEROES''. It is
definitely possible to write recursive tactics which will {\em not\/}
terminate, by the way; to break out of such a process, use Control-C
and then use the top command to reset environment variables (and then
use ``steps'' to see what went wrong).
The possibility of defining tactics in terms of themselves gives us
the ``looping'' (actually recursive) control structure; the fact that
theorems or tactics fail where they do not apply gives us a limited
``conditional'' control structure (refined by the use of $=>>$ and
$<<=$ to construct lists of alternative theorems to be applied), which
proves to be sufficient to break out of recursions. Elaborate mutual
recursions are possible, and are useful in practice.
The tactic language built into MARK2 is itself a programming language.
And programs written in this language are treated by the prover as
equational theorems, stored with other theorems in saved theories on
an equal footing.
An exercise would be to use the experience gained in writing the
theorem ``FOIL'' assigned above to write a tactic ``EXPAND'' which
aggressively expands terms written using addition and multiplication
as far as possible, applying the distributive law (in either form)
wherever possible. Another exercise is to write a theorem
``SUPERASSOC'' which will eliminate all parentheses from a complicated
sum, by applying the theorem ``ASSOC'' until it is no longer possible
to do so.
\subsection{Appendix: Reference for Individual Commands}
All arguments of commands must be enclosed in double quotes (except
for certain integer arguments); commands without arguments must be
supplied with the dummy argument (). All command lines end with a
semicolon. Double quotes and () will not be shown in the individual
command references.
%\begin{description}
\newpage
\subsubsection{Interfaces}
\begin{description}
\item[walk] The ``walk'' command invokes a new user interface
independent of ML (this is probably its only virtue!). Commands in
the ``walk'' interface are single characters. Arguments (which may
have spaces in them) are separated by tabs. The character versions
of commands are noted below under each command. To quit the walk
interface type ``q'' (to quit and save), ``Q'' (to quit without
saving). For help type ``h''; a list of character commands is
displayed.
\item[noml] The ``noml'' command implements an interface independent
of ML whose command format is almost the same; the unit type
parameter required by ML for commands with no real parameters is
optional at the noml interface. One does not get messages from ML,
but one also does not have a prompt, which may be distracting. {\tt
noml} uses the same file parser as the {\tt script} command.
\end{description}
\subsubsection{Theory Loading and Saving Commands}
\begin{description}
\item[load] ``load file-name'' causes the theory stored in
file-name.wthy to be loaded into the workspace. If a theory has
already been loaded, it is backed up on the desktop. Load sets the
``current theory'' environment variable to ``file-name'' as well.
(walk command ``G'').
\item[safesave] The safesave command has no arguments. It calls the
``storeall'' command with argument the ``current theory''. The
``walk'' interface version is ``S''.
\item[storeall] ``storeall file-name'' stores the current theory to
file-name.wthy. It also sets the ``current theory'' environment
variable to file-name. (walk command ``F'').
\item[clear] The clear command erases the current theory and loads
preambles (axioms always used). It does not affect other theories
on the desktop. It sets the ``current theory'' to ``scratch'' (walk
command ``C'').
\item[clearpredicative] As ``clear'', and imposes predicativity
restrictions on abstraction. Of technical interest only.
\item[versiondate] This command (no parameter) displays the date of
the version of the software being used.
\item[script] ``script file-name'' has the effect of the ML command
``use file-name.mk2''; files with the extension {\tt .mk2} are to be
recognizable as proof scripts. {\tt script} now uses a file parser
independent of ML, also exploited by the {\tt noml} alternative
interface command.
\item The next two items describe commands which control prover output
in ways that are sometimes convenient when running proof scripts.
\item[setpause, setnopause:] {\tt setpause} causes the prover to pause
whenever it issues an error message. This is useful when running
proof scripts. {\tt setnopause} reverses this effect.
\item[bequiet, speakup:] {\tt bequiet} suppresses all output usually
seen from proof scripts, except error messages. {\tt speakup}
reverses this effect, which is necessary if normal prover function is
to be restored.
\end{description}
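A typical save-and-reload cycle using the commands above might look like this (the theory name ``arith'' is just an example):
\begin{verbatim}
storeall "arith"; (* write arith.wthy; current theory is now arith *)
clear();          (* erase the workspace *)
load "arith";     (* reload the stored theory *)
safesave();       (* stores to arith.wthy, the current theory *)
\end{verbatim}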
\newpage
\subsubsection{Environment Starting and other Modification Commands}
\begin{description}
\item[start] ``start term'' sets both sides of current equation to
``term'' and dependencies to null. It backs up the previous
environment onto the desktop and sets the ``current theorem''
environment variable to the null string (walk command ``s'').
\item[startfor] ``startfor thm term'' acts as ``start'', except that
it sets the ``current theorem'' variable to ``thm''.
\item[starthere] ``starthere'' backs up the environment, then sets
both sides of the current equation to the current whole term and
dependencies to null.
\item[startover] ``startover'' backs up the environment, wipes
dependencies, and sets both sides of the current equation to its
left side (usually the term one started with). (walk command ``B'').
\item[workback] ``workback'' interchanges the left and right sides of
the current equation (so one will be looking at the original term
entered) (walk command ``b'').
\item[getleftside] see editing commands
\item[getrightside] see editing commands
\item[autoedit] see editing commands
\end{description}
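For instance, using the axiom ``COMM'' from the earlier examples (the theorem name ``COMMTEST'' is hypothetical):
\begin{verbatim}
startfor "COMMTEST" "?x+?y"; (* current theorem is now COMMTEST *)
ruleintro "COMM"; execute(); (* right side becomes ?y+?x *)
workback();  (* interchange sides to view the original term *)
workback();  (* and back again *)
startover(); (* wipe the work; both sides are again ?x+?y *)
\end{verbatim}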
\newpage
\subsubsection{Environment Display Commands}
\begin{description}
\item[look] View the whole term (above) and the current subterm
(below) (walk command ``?'').
\item[lookback] View the left side of the current equation.
\item[lookhere] View the current subterm alone.
\item[lookhyp] ``lookhyp integer'' displays the hypothesis indexed by
``integer''; see the discussion of the hypothesis function of the
prover above.
\item[lookhyps] Displays all hypotheses.
\item[seedeps] See the dependencies of the theorem under construction.
(walk command ``j'').
\end{description}
\newpage
\subsubsection{Display Control Commands}
\begin{description}
\item[declareprefix] ``declareprefix infix default'' causes the operator
``infix'' to be usable as a unary prefix, with invisible left
subterm ``default'' (walk command ``X'').
\item[deprefix] ``deprefix infix'' removes any default prefix
assigned to ``infix''.
\item[setprecedence] ``setprecedence infix number'' causes the operator
``infix'' to be assigned the precedence ``number'' (this parameter
is not enclosed in double quotes). Non-negative integers are
permitted as precedences. Odd precedences associate to the left;
even precedences to the right. Precedence information is saved in
saved theories; however, theorems are recorded in theory files in
the old syntax. The default precedence for any operator is 0 (walk
command ``I'').
\item[showprecs] Displays precedence information in a tabular format;
also gives left or right association information. (walk command ``k'')
\item[clearprecs] Sets all precedences back to 0 (so we are back to
the original precedence convention of the prover; no precedence and
all operations associate to the right).
\item[setline] ``setline number'' sets the point at which lines wrap
around in the display to ``number'' (the argument is {\em not\/}
enclosed in double quotes). (walk command ``K'')
\item[setdepth] Sets the depth of the display (terms below this depth
will not be seen---they are replaced with some impossible string).
The form of the command is ``setdepth number'', and the number is
not enclosed in double quotes. If the number is negative (as it
usually is), all subterms will be displayed.
\item[nodepth] Sets depth to a negative value so that all subterms are
displayed.
\item[declaretypeinfo] {\tt declaretypeinfo variable type} enables the
term {\tt :?variable} (the variable parameter is currently entered
without leading {\tt ?}, though this will probably be changed) to be
read as {\tt type : ?variable}; this allows implicit typing of
variables. This information is not currently saved in theories.
\item[detypeinfo] {\tt detypeinfo variable} turns off any implicit
typing of the variable {\tt ?variable}.
\end{description}
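As an illustration of the precedence commands (the particular numbers are arbitrary):
\begin{verbatim}
setprecedence "+" 1; (* odd precedence: associates to the left *)
setprecedence "*" 3; (* odd precedence: associates to the left *)
showprecs();         (* tabular display of precedence information *)
clearprecs();        (* restore the original convention *)
\end{verbatim}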
\newpage
\subsubsection{Movement Commands}
\begin{description}
\item[right] Changes current subterm to right subterm of previous
current subterm. Takes the immediate proper subterm of a constant
function term (walk command ``r'').
\item[left] Changes current subterm to left subterm of previous
current subterm. Takes the immediate proper subterm of a constant
function term (walk command ``l'').
\item[up] Changes current subterm to the smallest subterm properly
including the previous current subterm (walk command ``u'').
\item[top] Changes current subterm to the whole term (walk command ``t'').
\item[upto] ``upto term'' repeats ``up'' until a term is encountered
which matches ``term'' in structure, or until the whole term is
reached. (walk command ``o'')
\item[downtoleft] ``downtoleft term'' goes to the leftmost term
matching ``term'' in structure. (walk command ``$<$'')
\item[downtoright] As ``downtoleft'', except rightmost. (walk command ``$>$'')
\item[New movement commands:] Commands {\tt uptols}, {\tt uptors},
{\tt dlls}, {\tt dlrs}, {\tt drls}, and {\tt drrs} enable movement to
places where theorems can be applied. Commands beginning with {\tt
upto-} are variants of ``upto''; commands beginning with {\tt dl-}
and {\tt dr-} are variants of ``downtoleft'' and ``downtoright''
respectively; the suffix indicates the ``left side'' or ``right
side'' of the theorem to be matched.
\end{description}
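A short session illustrating the basic movement commands:
\begin{verbatim}
start "(?a+?b)*?c";
left();             (* current subterm: ?a+?b *)
right();            (* current subterm: ?b *)
up();               (* back to ?a+?b *)
top();              (* the whole term again *)
downtoleft "?x+?y"; (* leftmost subterm matching a sum: ?a+?b *)
upto "?x*?y";       (* smallest enclosing product: the whole term *)
\end{verbatim}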
\newpage
\subsubsection{Declaration Commands}
\begin{description}
\item[showdec] ``showdec term'' displays the declaration information
for the constant or operator ``term'' (relative types of left and
right subterms, and default left subterm if used as a prefix).
Displays comments. (walk command ``$\sim$'').
\item[showalldecs] Shows all declarations, one per keystroke on Enter.
Use any key other than Enter to break out.
\item[declaretypedinfix] ``declaretypedinfix left-type right-type
name'' declares the operator ``name'' with left and right types
``left-type'' and ``right-type'' (the types are integers without
double quotes; negative integers are prefixed with \~{} in ML). (walk
command ``Y'').
\item[declareconstant] ``declareconstant name'' declares a constant
with name ``name''. (walk command ``c'').
\item[declareinfix] ``declareinfix name'' declares a type-level operator
(both relative types 0) called ``name''. (walk command ``d'').
\item[declareunary] ``declareunary name'' declares a type-level operator
called ``name'' usable as a prefix with default left subterm {\tt
defaultprefix}. (walk command ``1'').
\item[newcomment] ``newcomment name comment'' attaches the comment
``comment'' to the declared object named by ``name''. This supersedes
older comments.
\item[appendcomment] As newcomment, except that comments are appended
to previous comments.
\item[makeunary] ``makeunary name'' assigns the default left argument
``defaultprefix'' to an operator which may already exist (it does not
attempt a declaration).
\item[declareopaque] ``declareopaque infix'' causes the operator
``infix'' to be treated as referentially opaque for purposes of
abstraction. (walk command ``y'') New restriction: this command
can be used only to introduce a new operator; previously existing
operators cannot be made opaque.
\item[makescin, makescout] ``makescin name theorem'' causes the
function constant or operator named by ``name'' to be declared as having
input (resp. output) restricted to absolute types (strongly Cantorian
sets) with consequent weakening of stratification requirements. The
theorem named by ``theorem'' must witness this fact.
\item[declarepretheorem] ``declarepretheorem thm'' reserves ``thm''
as the name of a future theorem (needed for the proofs of recursive
tactics). (walk command ``P'').
\item[forget] ``forget name'' eliminates declarations of all theorems
which depend on or refer to theorem ``name''. It will eliminate an
entire module if any component depends on ``name''.
\end{description}
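For instance, a fresh theory for a fragment of arithmetic might be set up as follows (assuming none of these identifiers is already declared; the ``axiom'' command is described under Proof Commands below):
\begin{verbatim}
declareconstant "0";
declareinfix "+";
newcomment "+" "addition";
axiom "COMM" "?x+?y" "?y+?x";
showdec "+"; (* displays the declaration information and the comment *)
\end{verbatim}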
\newpage
\subsubsection{Theorem Display Commands}
\begin{description}
\item[thmdisplay] ``thmdisplay thm'' causes theorem ``thm'' to be
displayed. (walk command ``D''). Displays comments. Displays
names but not contents of module components (hidden theorems).
\item[showalltheorems] Causes a theorem to be displayed with each
keystroke (most recently proved first). Any key other than Enter
breaks out. (walk command ``T'' has a similar function; use ``l''
and ``r'' to scan through the theorem list after issuing the ``T''
command)
\item[moddisplay, showallmodules] As thmdisplay, showalltheorems,
except that module contents are displayed.
\item[seeprogram] Displays the theorem (if any) bound to a constant
or operator.
\item[seeallprograms] Displays all programs.
\item[seedef] ``seedef defined'' displays the defining theorem of the
constant or operator ``defined''.
\item[seealldefs] Displays all definitions.
\item[showrelevantthms] Displays theorems applicable to the current
subterm. Will not display theorems which apply to {\em all\/}
terms. Hitting any key other than Enter will break out.
\item[showallaxioms] Displays all axioms and definitions.
\end{description}
\newpage
\subsubsection{Proof Commands}
\begin{description}
\item[axiom] ``axiom name left-side right-side'' causes an axiom with
name ``name'', left side ``left-side'' and right side ``right-side''
to be created.
\item[interaxiom] If the current subterm is an equation, ``interaxiom
axiom'' introduces it as an axiom called ``axiom''. (walk command ``x'').
\item[makeanaxiom] ``makeanaxiom thm'' makes theorem ``thm'' an axiom.
\item[proveanaxiom] If the current equation proves an axiom ``axiom''
(verbatim) from other axioms, the command ``proveanaxiom axiom''
will reset dependencies globally to reflect this fact.
\item[prove] ``prove thm'' stores the current equation as ``thm''.
``thm'' may have variable parameters. (walk command ``p'').
\item[autoprove] Stores the current equation as a theorem, using the
``current theorem'' environment variable as its name.
\item[reprove] See editing commands
\item[autoreprove] See editing commands
\item[redefinition commands] See the newer Command Reference.
\item[module system commands] See the newer Command Reference.
\end{description}
\newpage
\subsubsection{Editing Commands}
\begin{description}
\item[applythm] ``applythm thm'' applies the theorem ``thm'', but
without executing any embedded theorems introduced. ``thm'' may be
supplied with parameters.
\item[applyconvthm] As ``applythm'', but in the reverse direction.
\item[applyenv, applyconvenv] As the previous two commands, but using
saved environments.
\item[reprove] ``reprove thm'' allows the current equation to be
stored as ``thm'', where ``thm'' already exists, with the sole
constraint being that the new version of ``thm'' must depend on no
more axioms than the old version of ``thm''.
\item[autoreprove] Same effect as ``reprove'', with the same
restriction, with the argument set to the ``current theorem''
environment variable.
\item[getleftside] ``getleftside thm'' gets left side of theorem as
both sides of current equation and sets ``current theorem'' to
``thm''. Saves old environment on desktop. (walk command ``L'').
\item[getrightside] As ``getleftside'', but the right side of the
theorem. (walk command ``R'').
\item[autoedit] ``autoedit theorem'' sets the current equation to
``theorem'', and sets the current theorem environment variable to
``theorem'' (with parameters, if any). (saves old environment on
desktop). (walk command ``E'').
\end{description}
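The difference between ``applythm'' and the rule-introduction commands can be seen in a session like this (using the axiom ``COMM'' of the earlier examples):
\begin{verbatim}
start "?a+?b";
applythm "COMM";     (* current subterm is rewritten directly to ?b+?a *)
applyconvthm "COMM"; (* and back again *)
\end{verbatim}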
\newpage
\subsubsection{Definition Commands}
\begin{description}
\item[defineconstant] ``defineconstant left-side right-side'', where
left-side has the correct format (an undeclared constant possibly
applied to lists of parameters) declares the new constant read from
``left-side'' and introduces a theorem with null dependencies of the
form ``left-side = right-side'', with the same name as the constant
defined.
\item[defineinfix] ``defineinfix thm left-side right-side'' where
``left-side'' has a top-level undeclared operator (infix or prefix)
taking arguments and possibly applied to parameters or lists of
parameters, declares the new operator (as type level) and creates a
theorem called ``thm'' equating ``left-side'' and ``right-side''.
\item[definetypedinfix] ``definetypedinfix thm left-type right-type
left-side right-side'' has the same effect as ``defineinfix thm
left-side right-side'', except that the left and right subterms of the
new operator are assigned relative types ``left-type'' and
``right-type'': these arguments are numerical and are {\em not\/}
put in double quotes.
\item[defineitconstant] Like ``defineconstant'', except that the
right side is the current subterm.
\item[defineitinfix] Like ``defineinfix'', except that the right side
is the current subterm.
\item[defineittypedinfix] Like ``definetypedinfix'', except that the
right side is the current subterm.
\item[defineopaque] Defines opaque operators. Same arguments as
``definetypedinfix'', except that types are not needed.
\end{description}
\newpage
\subsubsection{Rule Introduction Commands}
\begin{description}
\item[ruleintro] ``ruleintro theorem'' introduces an embedded
occurrence of ``theorem'' at the current subterm. (walk command ``i'').
\item[revruleintro] As ``ruleintro'', except that the theorem is to
be applied in the reverse direction. (walk command ``v'').
\item[undelay] Remove a ``delay'' prefix from the current subterm.
\item[delay] Introduce a ``delay'' prefix to the current subterm.
\item[unlazy, lazy] As ``undelay'' or ``delay'' with the full
laziness prefix.
\item[altrule] Converts an embedded theorem at the current subterm
between the normal and alternative forms. (walk command ``f'').
\item[droprule] Drops an embedded theorem from top of current subterm
(if there is one).
\item[proveprogram] ``proveprogram name thm'' causes the theorem
``thm'' to be applied wherever the constant or operator ``name''
appears in a context defined by the left side of ``thm''. Simulates
functional programming.
\item[deprogram] ``deprogram name'' removes the theorem bound to
``name''.
\end{description}
\newpage
\subsubsection{Assignment Commands}
\begin{description}
\item[assign] ``assign variable term'' replaces the variable
``variable'' with the term ``term'' throughout the equation under
construction. (walk command ``a'').
\item[assigninto] ``assigninto variable term'' replaces the variable
``variable'' with the two sides of the equation under construction
to produce a new equation under construction. (walk command ``n'').
\item[assignright] ``assignright var thm'' assigns the right side of
theorem ``thm'' as the value of the variable ``var'' throughout the
current and initial terms of the theorem under construction.
\item[assignleft] As ``assignright'', but the left side of ``thm''.
\end{description}
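For example (whether the variable argument is entered with its leading ``?'' is an assumption here):
\begin{verbatim}
start "?x+?y";
assign "?y" "0"; (* both sides of the equation become ?x+0 *)
\end{verbatim}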
\newpage
\subsubsection{Tactic Language Interpreter}
\begin{description}
\item[steps] The trace/debug command for the tactic language; executes
one step in current subterm each time a key is pressed. Hit ``q''
to break out; terminates automatically if the term stabilizes.
\item[onestep] More limited trace/debug command; executes just one
step in current subterm.
\item[execute] The tactic language interpreter; executes all embedded
theorems or tactics in the current subterm, and all embedded
theorems or tactics newly invoked in this way, using an aggressive
depth-first strategy. (walk command ``e'').
\item[Warning (for all these commands):] If you break out of any
tactic interpreter using Control-C, use the ``top'' command to reset
environment variables.
\end{description}
\newpage
\subsubsection{Theory Desktop Commands}
\begin{description}
\item[scantheories] Allows one to scan the theories on the desktop;
hit ``h'' in any ``scan'' command to see the local commands to move
from item to item or to quit.
\item[cleartheories] Erases all theories on the desktop.
\item[droptheory] ``droptheory theory'' drops theory ``theory'' from
the desktop.
\item[gettheory] ``gettheory theory'' loads the theory ``theory''
from the desktop (resetting the ``current theory'' environment
variable). It automatically backs up the current theory unless the
current theory is the one being loaded.
\item[backuptheory] Saves current theory on desktop.
\end{description}
\newpage
\subsubsection{Environment Desktop Commands}
\begin{description}
\item[getenv] ``getenv env'' allows one to get the environment
(equation under construction in the current theory) called ``env''.
The current environment is saved unless it is called ``env'' itself.
\item[saveenv] ``saveenv env'' allows one to save the current
equation as ``env''.
\item[backupenv] Saves the current environment using the ``current
theory'' environment variable as name; if the current theory is
null, calls it ``backup''.
\item[dropenv] ``dropenv env'' erases environment called ``env''.
\item[clearenvs] Erases all environments on the desktop.
\item[loadsavedenvs] Loads saved environments in the theory onto the
desktop; one must hit return for each environment. One can use this in
combination with clearenvs to unclutter theory files.
\item[scanenvs] Allows one to scan the environments on the desktop (in
the current theory). Type ``h'' after typing any ``scan'' command
to see the local commands to move from item to item or to quit.
\end{description}
\newpage
\subsubsection{View Management Commands}
\begin{description}
\item[showviews] This command shows all views in a tabular format.
\item[showview] ``showview view'' shows the view ``view'' in a
tabular format (translation of terms in the source theory to terms
in the target theory; views are stored as part of the source
theory).
\item[viewofin] ``viewofin ident view'' shows the translation of
``ident'' in the view ``view''.
\item[clearviews] Erases all views.
\item[dropaview] ``dropaview view'' erases the view ``view''.
\item[dropfromview] ``dropfromview view ident'' drops the translation
of ``ident'' from the view ``view''.
\item[viewasselfin] ``viewasselfin ident view'' causes ``ident'' to
be translated by ``ident'' in view ``view''.
\item[viewasin] ``viewasin name1 name2 view'' causes ``name1'' to be
translated as ``name2'' in view ``view'' (name1 must be an
identifier in the current theory).
\item[declareview] ``declareview view'' creates a view with some
default content.
\end{description}
\newpage
\subsubsection{Theorem Export Commands}
\begin{description}
\item[exportthm] ``exportthm view prefix theorem remote-theory''
exports ``theorem'' and all iteratively embedded theorems to
``remote-theory'', using ``view'' to set up the translation, and
adding ``prefix'' to each theorem name. If the view does not check,
no changes will be made. ``\#'' is used as the default prefix for
operators which are created during the export. ``remote-theory'' must
be on the desktop.
\item[exportthmlist] ``exportthmlist view prefix1 prefix2 theorems
remote-theory'' is a more general version of exportthm above: the
differences are that ``theorems'' is a list of theorems to be
exported, and that prefix1 (for theorem and constant names) is
supplemented with prefix2 (for operator names).
\end{description}
\subsubsection{Command Abbreviations}
\begin{verbatim}
(* /////////// ml declarations to change the names of commands //////////// *)
fun ri x = ruleintro x;
fun rri x = revruleintro x;
fun ari x = altruleintro x;
fun arri x = altrevruleintro x;
fun s x = start x;
fun p x = prove x;
fun rp x = reprove x;
fun ex() = execute();
fun di x = declareinfix x;
fun du x = declareunary x;
fun a x = axiom x;
fun dpt x = declarepretheorem x;
fun sp x n = setprecedence x n;
fun td x = thmdisplay x;
fun wb() = workback();
fun ae x = autoedit x;
fun tri x = targetruleintro x;
fun smt x = showmatchthm x;
fun srt() = showrelevantthms();
fun dti s t = declaretypeinfo s t;
fun uti s = detypeinfo s;
fun sat() = showalltheorems();
\end{verbatim}
%\end{description}
\end{document}