\input macro.tex[tur,cjs]

\sect{STAN-CS-79-747 (AIM-329, AD-A076 872)}

{\ic Using Patterns and Plans to Solve Problems and
Control Search} by David Edward Wilkins (Thesis, 264 pages, June 1979)

The type of reasoning done by human chess masters has not been done by
computer programs.  The purpose of this research is to investigate the
extent to which knowledge can replace and support search in selecting a
chess move and to delineate the issues involved.  This has been carried
out by constructing a program, PARADISE (PAttern Recognition Applied to
DIrecting SEarch), which finds the best move in tactically sharp middle
game positions from the games of chess masters.

PARADISE plays chess by doing a large amount of static, knowledge-based
reasoning and constructing a small search tree (tens or hundreds of nodes)
to confirm that a particular move is best, both characteristics of human
chess masters.  A ``Production-Language'' has been developed for expressing
chess knowledge in the form of productions, and the knowledge base
contains about 200 productions written in this language.  The actions of
the rules post concepts in the data base while the conditions match
patterns in the chess position and data base.  The patterns are complex to
match (search may be involved).  The knowledge base was built
incrementally, relying on the system's ability to explain its reasoning
and the ease of writing and modifying productions.  The productions are
structured to provide ``concepts'' to reason with, methods of controlling
pattern instantiation, and means of focusing the system's attention on
relevant parts of the knowledge base.  PARADISE knows why it believes its
concepts and may reject them after more analysis.
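
Schematically (our notation, not the thesis's), each production pairs a
condition with an action:

$$\hbox{pattern over position and data base}\;\longrightarrow\;
\hbox{post concepts, with the pattern's bindings, in the data base},$$

\noindent where instantiating the left-hand side may itself require search.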

PARADISE uses its knowledge base to control the search, and to do a static
analysis of a new position which produces concepts and plans.  Once a plan
is formulated, it guides the tree search for several ply by focusing the
analysis at each node on a small group of productions.  Expensive static
analyses are rarely done at new positions created during the search.
Plans may be elaborated and expanded as the search proceeds.  These plans
(and the use made of them) are more sophisticated than those in previous
AI systems.  Through plans, PARADISE uses the knowledge applied during a
previous analysis to control the search for many nodes.

By using the knowledge base to help control the search, PARADISE has
developed an efficient best-first search which uses different strategies
at the top level.  The value of each move is a range which is gradually
narrowed by doing best-first searches until it can be shown that one move
is best.  By using a global view of the search tree, information gathered
during the search, and information produced by static analyses, the
program produces enough terminations to force convergence of the search.
PARADISE does not place a depth limit on the search (or any other
artificial effort limit).  The program incorporates many cutoffs that are
not useful in less knowledge-oriented programs (e.g., restrictions are
placed on concatenation of plans, accurate forward prunes can be made on
the basis of static analysis results, a causality facility determines how
a move can affect a line already searched using patterns generated during
the previous search).
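
The convergence criterion can be stated compactly (our formulation, not the
program's own notation): if each candidate move $m_i$ carries a value range
$[l_i, u_i]$, repeated best-first searches narrow the ranges until

$$l_i \;\geq\; \max_{j \neq i} u_j \qquad \hbox{for some move } m_i,$$

\noindent at which point $m_i$ is provably best and the search terminates.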

PARADISE has found combinations as deep as 19 ply and performs well when
tested on 100 standard problems.  The modifiability of the knowledge base
is excellent.  Developing a search strategy which converges, and using a
large knowledge base composed of patterns of this complexity to achieve
expert performance on such a complex problem, are advances in the state of
the art.

\vfill
\endpartx

\sect{STAN-CS-79-751 (AIM-330)}

{\ic The Modal Logic of Programs} by Zohar Manna and
Amir Pnueli (36 pages, September 1979)

We explore the general framework of Modal Logic and its applicability to
program reasoning.  We relate the basic concepts of Modal Logic to the
programming environment:  the concept of ``world'' corresponds to a program
state, and the concept of ``accessibility relation'' corresponds to the
relation of derivability between states during execution.  Thus we adopt
the Temporal interpretation of Modal Logic.  The variety of program
properties expressible within the modal formalism is demonstrated.

The first axiomatic system studied, the {\ic sometime system}, is adequate
for proving total correctness and `eventuality' properties.  However, it
is inadequate for proving invariance properties.  The stronger {\ic nexttime
system} obtained by adding the {\ic next} operator is shown to be adequate
for invariances as well.
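
In the temporal interpretation the operators can be read over an execution
sequence $\sigma = s_0, s_1, s_2, \ldots$ of program states (a standard
reading, sketched in our notation rather than the paper's):

$$\sigma \models \Box p \iff p \hbox{ holds in every } s_i, \qquad
\sigma \models \Diamond p \iff p \hbox{ holds in some } s_i, \qquad
\sigma \models \bigcirc p \iff p \hbox{ holds in } s_1.$$

\noindent Invariance properties have the form $\Box p$ and eventualities the
form $\Diamond p$; total correctness with precondition $\varphi$ and
postcondition $\psi$ is, roughly, $\varphi \supset \Diamond \psi$.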

\vfill
\endpartx

\sect{STAN-CS-79-755 (AIM-331)}

{\ic Efficiency Considerations in Program Synthesis:
A Knowledge-Based Approach} by Elaine Kant (Thesis, 160 pages, July 1979)

This thesis presents a framework for using efficiency knowledge to guide
program synthesis when stepwise refinement is the primary synthesis
technique. The practicality of the framework is demonstrated via the
implementation of a particular search-control system called LIBRA.  The
framework is also one model for how people write and analyze programs.

LIBRA complements an interactive program-synthesis project by adding
efficiency knowledge to the basic coding knowledge and by automating the
implementation-selection process.  The program specifications include size
and frequency notes, the performance measure to be minimized, and some
limits on the time and space of the synthesis task itself.  LIBRA selects
algorithms and data representations and decides whether to use
``optimizing'' transformations by applying knowledge about time and storage
costs of program components.  Cost estimates are obtained by incremental,
algebraic program analysis.

Some of the most interesting aspects of the system are explicit rules
about plausible implementations, constraint setting, multiple-exit-loop
analysis, the comparison of optimistic and achievable cost estimates to
identify important decisions, and resource allocation on the basis of
importance.  LIBRA has automatically guided the construction of programs
that classify, retrieve information, sort, and compute prime numbers.
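
One way to read the optimistic/achievable comparison (our formulation; the
thesis's actual measure may differ): for a pending implementation decision $d$,

$$\hbox{importance}(d) \;=\; \hbox{achievable}(d) \;-\; \hbox{optimistic}(d),$$

\noindent where achievable is the cost of the best implementation found so
far and optimistic is a lower bound assuming every remaining choice goes
well; decisions with large gaps deserve a larger share of the synthesis-time
budget.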

\vfill
\endpartx

\sect{STAN-CS-79-762 (AIM-332, AD-A083 229)}

{\ic METAFONT: A System for Alphabet Design} by
Donald Knuth (110 pages, September 1979)

This is the user's manual for METAFONT, a companion to the TEX typesetting
system.  The system makes it fairly easy to define high quality fonts of
type in a machine-independent manner; a user writes ``programs'' in a new
language developed for this purpose.  By varying parameters of a design,
an unlimited number of typefaces can be obtained from a single set of
programs.  The manual also sketches the algorithms used by the system to
draw the character shapes.

\vfill
\endpartx

\sect{STAN-CS-79-772 (AIM-333)}

{\ic Building Program Models Incrementally from
Informal Descriptions} by Brian P. McCune (Thesis, 146 pages, October 1979)

Program acquisition is the transformation of a program specification into
an executable, but not necessarily efficient, program that meets the given
specification.  This thesis presents a solution to one aspect of the
program acquisition problem:  the incremental construction of program
models from informal descriptions.  The key to the solution is a framework
for incremental program acquisition that includes (1) a formal language
for expressing program fragments that contain informalities, (2) a control
structure for the incremental recognition and assimilation of such
fragments, and (3) a knowledge base of rules for acquiring programs
specified with informalities.

The thesis describes a LISP-based computer system called the {\ic Program
Model Builder} (abbreviated ``PMB''), which receives informal program
fragments incrementally and assembles them into a very high level program
model that is complete, semantically consistent, unambiguous, and
executable.  The program specification comes in the form of partial
program fragments that arrive in any order and may exhibit such
informalities as inconsistencies and ambiguous references.  Possible
sources of fragments are a natural language parser or a parser for a
surface form of the fragments.  PMB produces a program model that is a
complete and executable computer program.  The {\ic program fragment
language} used for specifications is a superset of the language in which
program models are built.  This {\ic program modelling language} is a very
high level programming language for symbolic processing that deals with
such information structures as sets and mappings.

PMB has expertise in the general area of simple symbolic computations, but
PMB is designed to be independent of more specific programming domains and
particular program specification techniques at the user level.  However,
the specifications given to PMB must still be algorithmic in nature.
Because of the very high level nature of the program model produced, PMB
also operates independently from implementation details such as the target
computer and low level language.  PMB has been tested both as a module of
the PSI program synthesis system and independently.  Models built as part
of PSI have been acquired via natural language dialogs and execution
traces and have been automatically coded into LISP by other PSI modules.
PMB has successfully built a number of moderately complex programs for
symbolic computation.

By design, the user is allowed to control the specification
process.  Therefore PMB must handle program fragments interactively and
incrementally.  Interesting problems arise because these informal
fragments may arrive in an arbitrary order, may convey an arbitrarily
small amount of new information, and may be incomplete, semantically
inconsistent, and ambiguous.  To allow the current point of focus to
change, a {\ic program reference language} has been designed for expressing
patterns that specify what part of the model a fragment refers to.
Various combinations of syntactic and semantic reference patterns in the
model may be specified.

The recognition paradigm used by PMB is a form of subgoaling that allows
the parts of the program to be specified in an order chosen by the user,
rather than dictated by the system.  Knowledge is represented as a set of
data driven antecedent rules of two types, {\ic response rules} and
{\ic demons}, which are triggered respectively by either the input of new
fragments or changes in the partial program model.  In processing a
fragment, a response rule may update the partial program model and create
new subgoals with associated response rules.  To process subgoals that are
completely internal to PMB, demon rules are created that delay execution
until their prerequisite information in the program model has been filled
in by response rules or perhaps other demons.
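
Schematically (our notation), the two rule types differ only in their
triggers:

$$\hbox{response rule: new fragment arrives} \;\longrightarrow\;
\hbox{update model, spawn subgoals};$$
$$\hbox{demon: model change matches antecedent} \;\longrightarrow\;
\hbox{fire, completing its subgoal}.$$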

PMB has a knowledge base of rules for handling modelling language
constructs, processing informalities in fragments, monitoring the
consistency of the model, and transforming the program to canonical form.
Response rules and simple demons are procedural.  {\ic Compound demons} have
more complex antecedents that test more than one object in the program
model.  Compound demons are declarative antecedent patterns that are
expanded automatically into procedural form.

\vfill
\endpartx

\sect{STAN-CS-80-806}

{\ic On the Approximate Solution of Hyperbolic Initial-Boundary Value Problems}
by William M. Coughran, Jr. (Thesis, 177 pages, June 1980)

Hyperbolic initial-boundary value problems arise in a number of scientific
disciplines, such as meteorology, oceanography, geophysics, aerodynamics, acoustics,
and magnetohydrodynamics.  These problems usually cannot be solved analytically,
so approximate methods must be used.  Unfortunately, the construction of stable
finite difference approximations is a subtle matter, which often confuses the
practitioner; the existing theories for establishing the well-posedness of
continuous initial-boundary value problems and the stability of discrete analogs
involve the verification of complicated algebraic conditions.  Moreover, the
stability theory for discrete initial-boundary value problems in more than one
space dimension is not well developed.

In this thesis, the existing stability theory for discrete initial-boundary value
problems, which has only been applied to (essentially) scalar model problems, is
used to analyze the stability of some $2 \times 2$ model problems, not written
in characteristic variables; it is noted that the most accurate interior/boundary
difference scheme combinations are the least stable to perturbations in the
coefficients.  (A practical numerical procedure for verifying the stability of
discrete initial-boundary value problems is also introduced.)  The stability
results for $2 \times 2$ systems can be used in the stability analysis of larger
systems where characteristics occur only singly and in pairs; in particular,
discretizations of the wave equation, the shallow water equations, and the Eulerian
equations for gas dynamics, which involve boundary conditions written in ``natural''
variables, are examined.  The stability theory is also extended to multi-dimensional
initial-boundary value problems by means of the concept of ``tangential dissipativity'';
as an application, a tangentially dissipative leap-frog method is shown to be stable
with Euler boundary conditions for a two-dimensional wave equation problem.  The
viability and limitations of the theory are demonstrated with some computational
experiments.  Finally, combining stability results with accuracy considerations,
various approximations and boundary conditions are ranked.
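
As an illustration of the kind of interior scheme at issue (a standard
example, not necessarily the thesis's exact discretization), the leap-frog
approximation to the model equation $u_t = u_x$ on a grid with spacings
$\Delta t$ and $\Delta x$ is

$$v_j^{n+1} \;=\; v_j^{n-1} + {\Delta t \over \Delta x}
\left(v_{j+1}^n - v_{j-1}^n\right),$$

\noindent which is stable on the whole line under the CFL-type restriction
$\Delta t/\Delta x < 1$; the delicate question is what further conditions
the boundary treatment imposes.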

\vfill
\endpartx

\sect{STAN-CS-80-807}

{\ic Path-Regular Graphs} by David W. Matula and Danny Dolev
(39 pages, June 1980)

A graph is vertex-[edge-]path-regular if a list of shortest paths, allowing
multiple copies of paths, exists where every pair of vertices are the endvertices
of the same number of paths and each vertex [edge] occurs in the same number of
paths of the list.  The dependencies and independencies between the various
path-regularity, regularity of degree, and symmetry properties are investigated.
We show that every connected vertex-[edge-]symmetric graph is vertex-[edge-]path-regular,
but not conversely.  We show that the product of any two vertex-path-regular
graphs is vertex-path-regular but not conversely, and the iterated product
$G \times G \times \cdots \times G$
is edge-path-regular if and only if $G$ is
edge-path-regular.  An interpretation of path-regular graphs is given regarding
the efficient design of concurrent communication networks.
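
In symbols (our paraphrase of the definition above), $G$ is
vertex-path-regular if there exist a list $L$ of shortest paths, repetitions
allowed, and constants $k$ and $r$ with

$$\#\{P \in L : P \hbox{ joins } u \hbox{ and } v\} = k
\hbox{ for every vertex pair } u, v, \qquad
\#\{P \in L : w \hbox{ on } P\} = r \hbox{ for every vertex } w;$$

\noindent edge-path-regularity instead requires a uniform count over the
edges occurring in paths of $L$.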

\vfill
\endpartx

\sect{STAN-CS-80-809 (AIM-338)}

{\ic An Extension of Screw Theory and its Application to the Automation of
Industrial Assemblies} by Jorgan S. Ohwovoriole (Thesis, 186 pages, April 1980)

Interest in mathematical models that adequately predict what happens in the
process of assembling industrial parts has heightened in recent times.  This is
a result of the desire to automate the assembly process.  Up to this point there
has not been much success in deriving adequate mathematical models of the assembly
process.

This thesis is an attempt to develop mathematical models of parts assembly.
Assembly involves motion of bodies which generally contact each other during
the process.  Hence, we study the kinematics of the relative motion of contacting
bodies.

Basic to the theory of assembly is the classical theory of screws which,
however, required substantial extensions for this application.  The thesis begins
with a review of basic screw theory, including line geometry and reciprocal screw
systems, and new and more general derivations of some of these screw systems.
We then extend the screw theory by introducing such concepts as ``repelling'' and
``contrary'' screw pairs, and ``total freedom.''
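
In formulas (our sketch; conventions vary by a constant factor): writing two
screws in axis coordinates as $S_1 = (\omega_1;\, v_1)$ and
$S_2 = (\omega_2;\, v_2)$, their virtual coefficient is proportional to

$$S_1 \circ S_2 \;=\; \omega_1 \cdot v_2 + \omega_2 \cdot v_1,$$

\noindent which vanishes exactly when the screws are reciprocal; the
``repelling'' and ``contrary'' pairs correspond, in our reading, to the
strictly positive and strictly negative cases.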

Finally, we give a method of characterizing assemblies of industrial parts.
Using the extended screw theory, we then analyze the ``general peg-in-hole assembly''
and subsequently give a mathematical description of this particular assembly.

\vfill
\endpartx

% ∞∞∞∞∞∞∞∞∞
\vfill\end
% ∞∞∞∞∞∞∞∞∞