.require "setup.abs[bib,csr]" source file;
.at "⊗" ⊂"%5αQ%*"⊃
.at "!!" txt ";"	⊂
.("txt"[1]&"∩[%2"&"txt"[2]&"]&∪[ "&"txt"[3]&"]");
.  ⊃
.begin center
S T A N F O R D   U N I V E R S I T Y
MOST RECENT COMPUTER SCIENCE REPORTS - 1979
.end

No. 5↔August

@Listed here are abstracts of the most recent reports published by the
Department of Computer Science at Stanford University.

@TO REQUEST REPORTS:  Check the appropriate places on the enclosed order
form, and return the entire order form page (including mailing label) by
October 26, 1979.  In many cases we can print only a limited number of copies,
and requests will be filled on a first-come, first-served basis.  If the code
(FREE) is printed on your mailing label, you will not be charged for hardcopy.
This exemption from payment is limited primarily to libraries.  (The costs
shown include all applicable sales taxes.  PLEASE SEND NO MONEY NOW, WAIT UNTIL 
YOU GET AN INVOICE.)

@ALTERNATIVELY:  Copies of most Stanford CS Reports may be obtained by writing
(about 2 months after the MOST RECENT CS REPORTS listing) to NATIONAL TECHNICAL
INFORMATION SERVICE, 5285 Port Royal Road, Springfield, Virginia 22161.
Stanford Ph.D. theses are available from UNIVERSITY MICROFILMS, 300 North
Zeeb Road, Ann Arbor, Michigan 48106.

@PLEASE NOTE:  In an effort to cut our costs so that we may continue to publish
Stanford Computer Science Reports, we will be making a format change.  Reports
over 100 pages in length will have their page size reduced so that 4 pages can
be printed on 1 sheet of paper.
.next page
.begin nofill
AIM-328↔(STAN-CS-79-743)
@"GOAL:  A Goal Oriented Command Language for Interactive Proof Construction"
Author:  Juan B. Bulnes-Rozas↔(Thesis)
175 pages↔Available in microfiche only.
.end

@This thesis represents a contribution to the
development of practical computer systems for interactive construction of
formal proofs. 
Beginning with a summary
of current research in automatic theorem proving,
goal oriented systems, proof checking, and program verification,
this work aims at bridging the gap between proof checking and theorem proving.

@Specifically,
it describes a system GOAL for the First Order Logic proof checker FOL.
GOAL helps the user of FOL in the 
creation of long proofs in
three ways: 1) as a facility for structured, top down proof
construction; 2) as a semi-automatic theorem prover; and
3) as an extensible environment for the programming of
theorem proving heuristics.

@In GOAL, the user defines top level goals. These are 
then recursively decomposed into subgoals.
The main part of
a goal is a well-formed formula that one desires to prove, but goals also
include assertions, simplification sets, and other information.
Goals can be tried by three different types of elements: 
%2matchers%1, %2tactics%1, and %2strategies%1. 

@The %2matchers%1 attempt to prove a goal directly (that is, without
reducing it into subgoals) by calling decision
procedures of FOL. Successful application of a matcher  causes
the proved goal to be added to the FOL proof.

@A %2tactic%1 reduces a goal into one or more subgoals. Each tactic
is the inverse of some inference rule of FOL; the goal structure
records all the necessary information so that the appropriate inference
rule is called when all the subgoals of a goal
are proved. In this way the goal tree unwinds automatically, producing
a FOL proof of the top level goal from the proofs of its
leaves.
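
@(As an editorial illustration of this unwinding, here is a minimal Python
sketch; the names and data layout are hypothetical, not the actual FOL/GOAL
code, which is written in a very different setting.)
.begin nofill
# Hypothetical sketch of a goal tree: a tactic splits a goal into subgoals
# and records the inference rule that will later reassemble their proofs.
class Goal:
    def __init__(self, wff, rule=None, subgoals=()):
        self.wff = wff                  # the formula to be proved
        self.rule = rule                # rule combining the subgoal proofs
        self.subgoals = list(subgoals)

def and_tactic(goal):
    """Inverse of and-introduction: reduce (A, B) to subgoals A and B."""
    left, right = goal.wff              # pretend the wff is a conjunction
    goal.rule = "and-introduction"
    goal.subgoals = [Goal(left), Goal(right)]

def unwind(goal, matcher):
    """Prove the leaves with a matcher, then apply each recorded rule
    bottom-up, yielding a proof of the top-level goal."""
    if not goal.subgoals:
        return matcher(goal.wff)        # a decision procedure proves leaves
    proofs = [unwind(g, matcher) for g in goal.subgoals]
    return (goal.rule, proofs)          # the rule reassembles the subproofs

g = Goal(("P", "Q"))
and_tactic(g)
print(unwind(g, lambda wff: ("axiom", wff)))
# -> ('and-introduction', [('axiom', 'P'), ('axiom', 'Q')])
.end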

@The %2strategies%1 are programmed sequences of applications of tactics and 
matchers. They do not interface with FOL directly.
Instead, they simulate a virtual user of FOL.
They can call the tactics, matchers, other strategies, or
themselves recursively.
The success of this approach to theorem proving
is documented by one heuristic strategy that has proved a number
of theorems in Zermelo-Fraenkel Axiomatic Set Theory.
Analysis of this strategy leads to a discussion of some tradeoffs
related to the use of %2assertions%1 and %2simplification sets%1 in
goal oriented theorem proving.

@The user can add new tactics, matchers, and strategies to GOAL. These
additions cause the language to be extended in a uniform way.
The description of new strategies is
done easily, at a fairly high level, and no faulty
deduction is possible.
Perhaps the main contribution of GOAL is
a high level environment for easy programming of
new theorem proving applications in the First Order Predicate Calculus.
.next page
@The thesis ends with 
two appendixes presenting complete proofs of Ramsey's theorem in axiomatic Set
Theory and of the correctness of the Takeuchi function.

@(It is planned that both FOL and GOAL will be made available over
the ARPANET this year. Inquiries regarding their use should be addressed
to Dr. R. Weyhrauch at the Stanford Artificial Intelligence Laboratory,
SU-AI).
.next page
.begin nofill
STAN-CS-79-744↔(CSL TR-172)
@"Performance of Update Algorithms for Replicated Data in a Distributed Database"
Author:  Hector Garcia-Molina↔(Thesis)
320 pages↔Cost:  $6.20
.end

@In this thesis we study the performance of update algorithms for replicated
data in a distributed database.  In doing so, we also investigate several other
related issues.

@We start by presenting a simple model of a distributed database which is
suitable for studying updates and concurrency control.  We also develop a
performance model and a set of parameters which represent the most important
performance features of a distributed database.

@The distributed database models are used to study the performance of update
algorithms for replicated data.  This is done in two steps.  First the algorithms
are analyzed in the case of completely replicated databases in a no-failure,
update-only environment.  Then, the restrictions that we made are eliminated one
at a time, and the impact on the system performance of doing this is evaluated.
For the first step, we develop a new technique for analyzing the performance of
update algorithms.  This iterative technique is based on queueing theory.
Several well known update algorithms are analyzed using this technique.  The
performance results are verified through detailed simulations of the algorithms.
The results show that centralized control algorithms nearly always perform
better than the more popular distributed control algorithms.  This is a surprising
result because the distributed algorithms were thought to be more efficient.

@The performance results are also useful for identifying the critical system
resources.  This insight leads to the development of several new update algorithms
with improved performance.  In particular, the MCLA-h algorithm which we
present performs better than all other update algorithms in most cases of interest.
The MCLA-h algorithm is based on the new concept of "hole lists", which are
used to increase the parallelism of updates.  Several variations of the MCLA-h
algorithm are also analyzed.
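
@(A toy sketch of the hole-list idea as the abstract describes it, with all
names and details invented here for illustration; the actual MCLA-h protocol
is considerably more involved.)
.begin nofill
# Toy sketch: a central controller hands out update sequence numbers; the
# "hole list" records numbers granted but not yet finished, letting later
# non-conflicting updates proceed instead of waiting for strict order.
class Controller:
    def __init__(self):
        self.next_seq = 0
        self.holes = set()            # granted but unfinished updates

    def grant(self):
        seq = self.next_seq
        self.next_seq += 1
        self.holes.add(seq)
        return seq, sorted(self.holes - {seq})   # update carries a hole list

    def finish(self, seq):
        self.holes.discard(seq)

c = Controller()
s1, h1 = c.grant()    # first update: hole list is empty
s2, h2 = c.grant()    # second update sees [s1]: it may run in parallel but
                      # must treat update s1 as not yet reflected in the data
c.finish(s1)
.end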

@In order to investigate the validity of our results in a general system, we relax
the assumptions made initially in the performance studies.  We show that it is
possible to make a centralized control algorithm (like the MCLA-h) resilient in
the face of many types of failures.  We show that the cost in terms of performance
for doing this is roughly the same for all algorithms and thus, the original
performance comparisons are still valid in the case of crash resistant algorithms.

@We analyze distributed databases with partitioned data and multiple independent
control mechanisms (called controllers).  We describe transaction processing
in this environment and we discuss how the performance results of the fully
duplicated case can be used to design update algorithms in this environment.
.next page
@We demonstrate how updates that do not specify their base set initially can be
processed.  We present three fundamental alternatives for processing these
updates and we analyze their performance.

@The operation of read only transactions (queries) and their interaction with
the update algorithms has not been covered in the literature.  In this thesis,
we classify queries into free, consistent, and current queries, and we present
algorithms for processing each type of query under the different update
algorithms.
.next page
.begin nofill
STAN-CS-79-745
@"Upper and Lower Bounds on Time-Space Tradeoffs in a Pebble Game"
Author:  Thomas Lengauer↔(Thesis)
83 pages↔Cost:  $4.05
.end

@We derive asymptotically tight time-space tradeoffs for pebbling three
different classes of directed acyclic graphs.  Let %2N%1 be the size of
the graph, %2S%1 the number of available pebbles, and %2T%1 the time
necessary for pebbling the graph.

@(a)  A time-space tradeoff of the form 
.skip 3
is proved for pebbling (using only black pebbles) a special class of
permutation graphs that implement the bit reversal permutation.  If
we are allowed to use black and white pebbles the time-space tradeoff
is shown to be of the form
.skip 3
@(b)  A time-space tradeoff of the form
.skip 3
is proved for pebbling a class of graphs constructed by stacking 
superconcentrators in series.  This time-space tradeoff holds whether we
use only black or black and white pebbles.

@(c)  A time-space tradeoff of the form
.skip 3
is proved for pebbling general directed acyclic graphs with only black or
black and white pebbles.
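
@(For readers unfamiliar with the game, a minimal sketch of the standard
black pebble game follows; these are the textbook rules, not code from the
thesis.)
.begin nofill
# Black pebble game: a pebble may be placed on a node only if all of its
# predecessors carry pebbles (sources can always be pebbled); a pebble may
# be removed at any time.  T counts placements; S is the maximum number of
# pebbles simultaneously on the graph.
def play(dag, moves):
    """dag: node -> list of predecessors; moves: ('place'|'remove', node)."""
    pebbled, time, space = set(), 0, 0
    for op, v in moves:
        if op == 'place':
            assert all(p in pebbled for p in dag[v]), "predecessor unpebbled"
            pebbled.add(v)
            time += 1
        else:
            pebbled.discard(v)
        space = max(space, len(pebbled))
    return time, space

# pebbling the chain a -> b -> c with two pebbles:
dag = {'a': [], 'b': ['a'], 'c': ['b']}
print(play(dag, [('place','a'), ('place','b'), ('remove','a'), ('place','c')]))
# -> (3, 2): time T = 3, space S = 2
.end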
.next page
.begin nofill
STAN-CS-79-746
@"ThingLab -- A Constraint-Oriented Simulation Laboratory"
Author:  Alan Borning↔(Thesis)
109 pages↔Available in microfiche only.
.end

@ThingLab is a system that provides an object-oriented environment for building
simulations.  Within this environment, part-whole and inheritance hierarchies
are used to describe the structure of a simulation, while constraints are employed
as a way of describing the relations among its parts.

@One of the principal goals of this research has been to design and implement a
language that helps the user describe complex simulations easily.  Constraints are
a particularly important tool for dealing with complexity, because they allow the
user to specify independently all the relations to be maintained, leaving it up to
the system to plan exactly how the constraints are to be satisfied.
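
@(A toy illustration of the constraint style, far simpler than ThingLab's
planning machinery and invented here for exposition: the user states a
relation, and a crude relaxation loop satisfies it.)
.begin nofill
# Constraint: m is the midpoint of a and b.  The user only declares the
# relation; here the "system" satisfies it by repeatedly moving m.
# (ThingLab itself plans constraint satisfaction in Smalltalk-76 rather
# than iterating blindly like this.)
def midpoint_error(a, b, m):
    return m - (a + b) / 2.0

a, b, m = 0.0, 10.0, 3.0
for _ in range(20):                 # crude relaxation loop
    m -= 0.5 * midpoint_error(a, b, m)
print(round(m, 3))                  # -> 5.0: the constraint is satisfied
.end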

@ThingLab is implemented in the Smalltalk-76 programming language, and runs
on a personal computer.  Among the things that have been simulated using the
system are constrained geometric objects, a bridge under load, electrical circuits,
documents with constraints on their layout and content, and a graphical
calculator.
.next page
.begin nofill
AIM-329↔(STAN-CS-79-747)
@"Using Patterns and Plans to Solve Problems and Control Search"
Author:  David Edward Wilkins↔(Thesis)
264 pages↔Cost:  $9.10
.end

@The type of reasoning done by human chess masters has not been achieved by
computer programs.  The purpose of this research is to investigate the
extent to which knowledge can replace and support search in selecting a
chess move and to delineate the issues involved.  This has been carried
out by constructing a program, PARADISE (PAttern Recognition Applied to
DIrecting SEarch), which finds the best move in tactically sharp middle
game positions from the games of chess masters.

@PARADISE plays chess by doing a large amount of static, knowledge-based
reasoning and constructing a small search tree (tens or hundreds of nodes)
to confirm that a particular move is best, both characteristics of human
chess masters.  A "Production-Language" has been developed for expressing
chess knowledge in the form of productions, and the knowledge base
contains about 200 productions written in this language.  The actions of
the rules post concepts in the data base while the conditions match
patterns in the chess position and data base.  The patterns are complex to
match (search may be involved).  The knowledge base was built
incrementally, relying on the system's ability to explain its reasoning
and the ease of writing and modifying productions.  The productions are
structured to provide "concepts" to reason with, methods of controlling
pattern instantiation, and means of focusing the system's attention on
relevant parts of the knowledge base.  PARADISE knows why it believes its
concepts and may reject them after more analysis.
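
@(A schematic reading of this architecture, with hypothetical rules and
names; PARADISE's actual Production-Language patterns are far richer and
their matching can itself involve search.)
.begin nofill
# Schematic production-rule cycle: conditions match patterns in the
# position and the data base; actions post concepts to the data base.
rules = [
    ("pinned-piece", lambda pos, db: "pin" in pos,
                     lambda db: db.add("attack-pinned-piece")),
    ("back-rank",    lambda pos, db: "weak-back-rank" in pos,
                     lambda db: db.add("mate-threat")),
]

def analyze(position):
    db = set()                       # concepts posted so far
    changed = True
    while changed:                   # run the rules to quiescence
        changed = False
        for name, cond, act in rules:
            before = len(db)
            if cond(position, db):
                act(db)
            changed = changed or len(db) != before
    return db

print(sorted(analyze({"pin", "weak-back-rank"})))
# -> ['attack-pinned-piece', 'mate-threat']
.end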

@PARADISE uses its knowledge base to control the search, and to do a static
analysis of a new position which produces concepts and plans.  Once a plan
is formulated, it guides the tree search for several ply by focussing the
analysis at each node on a small group of productions.  Expensive static
analyses are rarely done at new positions created during the search.
Plans may be elaborated and expanded as the search proceeds.  These plans
(and the use made of them) are more sophisticated than those in previous
AI systems.  Through plans, PARADISE uses the knowledge applied during a
previous analysis to control the search for many nodes.

@By using the knowledge base to help control the search, PARADISE has
developed an efficient best-first search which uses different strategies
at the top level.  The value of each move is a range which is gradually
narrowed by doing best-first searches until it can be shown that one move
is best.  By using a global view of the search tree, information gathered
during the search, and information produced by static analyses, the
program produces enough terminations to force convergence of the search.
PARADISE does not place a depth limit on the search (or any other
artificial effort limit).  The program incorporates many cutoffs that are
not useful in less knowledge-oriented programs (e.g., restrictions are
placed on concatenation of plans, accurate forward prunes can be made on
the basis of static analysis results, a causality facility determines how
a move may affect a line already searched using patterns generated during
the previous search).
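
@(The range-narrowing idea can be pictured abstractly as follows; this is an
editorial sketch with an artificial "tighten" step, not PARADISE's search.)
.begin nofill
# Each candidate move holds a value interval [lo, hi]; searches tighten
# intervals until one move's lower bound clears every other upper bound.
def best_move(ranges, tighten):
    while True:
        best, (lo, hi) = max(ranges.items(), key=lambda kv: kv[1][0])
        if all(lo >= h for m, (l, h) in ranges.items() if m != best):
            return best              # provably best: no overlap remains
        widest = max(ranges, key=lambda m: ranges[m][1] - ranges[m][0])
        ranges[widest] = tighten(widest, ranges[widest])

def tighten(move, interval):         # stand-in for a best-first search
    lo, hi = interval
    mid = (lo + hi) / 2
    return ((lo + mid) / 2, (mid + hi) / 2)

print(best_move({"Qxf7": (0, 10), "Nd5": (2, 4)}, tighten))   # -> Qxf7
.end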
.next page
@PARADISE has found combinations as deep as 19 ply and performs well when
tested on 100 standard problems. The modifiability of the knowledge base
is excellent.  Developing a search strategy which converges and using a
large knowledge base composed of patterns of this complexity to achieve
expert performance on such a complex problem is an advance on the state of
the art.
.next page
.begin nofill
STAN-CS-79-748
@"Fast Algorithms for Solving Toeplitz System of Equations and Finding Rational 
@Hermite Interpolants"
Author:  David Y. Y. Yun
9 pages↔Cost:  $1.95
.end

@We present a new algorithm that reduces the computation for solving a Toeplitz
system to %2O(n log↑2 n)%1 and automatically resolves all degenerate cases of
the past.  Our fundamental results show that all rational Hermite interpolants,
including Pad%6'%1e approximants, which are intimately related to this solution
process, can be computed fast by an Euclidean algorithm.  In this report we
bring out all these relationships with mathematical justifications and mention
important applications including decoding BCH codes.
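
@(The interpolation connection can be made concrete with a naive %2O(n↑2)%1
version of the computation, sketched below; the report's point is that the
same Euclidean scheme can be organized to run in %2O(n log↑2 n)%1.  All code
here is illustrative, not the report's.)
.begin nofill
# Pade approximants via the extended Euclidean algorithm on polynomials
# (coefficient lists, lowest degree first).
def degree(p):
    while p and abs(p[-1]) < 1e-12: p = p[:-1]
    return len(p) - 1

def sub_scaled(a, b, c, shift):      # a - c * x^shift * b
    out = list(a) + [0.0] * max(0, shift + len(b) - len(a))
    for i, bi in enumerate(b):
        out[i + shift] -= c * bi
    return out

def polydiv(a, b):
    q, r = [0.0] * max(1, len(a) - len(b) + 1), list(a)
    while degree(r) >= degree(b) >= 0:
        shift = degree(r) - degree(b)
        c = r[degree(r)] / b[degree(b)]
        q[shift] = c
        r = sub_scaled(r, b, c, shift)
    return q, r

def pade(series, m, n):
    """[m/n] approximant of a series given through degree m+n."""
    r0 = [0.0] * (m + n + 1) + [1.0]          # x^(m+n+1)
    r1, v0, v1 = series[:m + n + 1], [0.0], [1.0]
    while degree(r1) > m:                     # stop at remainder degree <= m
        q, r = polydiv(r0, r1)
        prod = [0.0] * (len(q) + len(v1))     # prod = q * v1
        for i, qi in enumerate(q):
            for j, vj in enumerate(v1):
                prod[i + j] += qi * vj
        width = max(len(v0), len(prod))
        vnext = [(v0[i] if i < len(v0) else 0.0) -
                 (prod[i] if i < len(prod) else 0.0) for i in range(width)]
        r0, r1, v0, v1 = r1, r, v1, vnext
    return r1, v1                             # numerator, denominator

# [1/1] approximant of exp(x) from 1 + x + x^2/2:  (1 + x/2) / (1 - x/2)
p, q = pade([1.0, 1.0, 0.5], 1, 1)
print([round(c / q[0], 3) for c in p[:2]], [round(c / q[0], 3) for c in q[:2]])
# -> [1.0, 0.5] [1.0, -0.5]
.end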
.next page
.begin nofill
HPP-79-17↔(STAN-CS-79-749)
@"Applications-oriented AI Research:  Education"
Authors:  William J. Clancey, James S. Bennett, and Paul R. Cohen
60 pages↔Available in microfiche only.
.end

@This report is the section from the Handbook of Artificial Intelligence
dealing with research on AI applications in education, the field called
"intelligent computer-assisted instruction" or ICAI.  The Handbook is a
compendium of articles about AI ideas, techniques, and systems and is
intended for the non-AI scientist, engineer, or student.  The material
in this report reviews the historical development of ICAI research and
discusses the important design issues for these systems.  Additional
articles describing the following important ICAI systems are also
included: SCHOLAR, WHY, SOPHIE, WEST, WUMPUS, BUGGY, and EXCHECK.  The
Handbook will be published early in 1980.  We have distributed this
manuscript before publication with the hope of obtaining comments and
suggestions from the AI community.
.next page
.begin nofill
STAN-CS-79-750
@"Khachian's Algorithm for Linear Programming"
Authors:  Peter G%6'%1acs and Laszlo Lov%6'%1asz
12 pages↔Cost:  $2.10
.end

@L. G. Khachian's algorithm to check the solvability of a system of linear
inequalities with integral coefficients is described.  The running time of
the algorithm is polynomial in the number of digits of the coefficients.
It can be applied to solve linear programs in polynomial time.
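
@(In outline, the central-cut ellipsoid iteration for the feasibility of
Ax ≤ b looks as follows.  This is a textbook sketch for illustration only;
the report's substance is the precision and termination analysis that makes
the method run in polynomial time.)
.begin nofill
import numpy as np

def ellipsoid_feasible(A, b, R=10.0, iters=200):
    """Search a ball of radius R around the origin for x with Ax <= b."""
    n = A.shape[1]
    x = np.zeros(n)                   # center of the current ellipsoid
    P = (R ** 2) * np.eye(n)          # its shape matrix
    for _ in range(iters):
        violated = np.nonzero(A @ x > b)[0]
        if violated.size == 0:
            return x                  # center satisfies all inequalities
        a = A[violated[0]]            # cut with any violated inequality
        Pa = P @ a
        a_norm = np.sqrt(a @ Pa)
        x = x - (1.0 / (n + 1)) * Pa / a_norm
        P = P - (2.0 / (n + 1)) * np.outer(Pa, Pa) / (a_norm ** 2)
        P = (n * n / (n * n - 1.0)) * P
    return None                       # no feasible point within the budget

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, -1.0, -1.0])       # x + y <= 4,  x >= 1,  y >= 1
print(ellipsoid_feasible(A, b))       # some point in the triangle
.end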
.next page
.begin nofill
AIM-330↔(STAN-CS-79-751)
@"The Modal Logic of Programs"
Authors:  Zohar Manna and Amir Pnueli
36 pages↔Cost:  $2.75
.end

@We explore the general framework of Modal Logic and its applicability to program
reasoning.  We relate the basic concepts of Modal Logic to the programming
environment:  the concept of "world" corresponds to a program state, and the
concept of "accessibility relation" corrsponds to the relation of derivability
between states during execution.  Thus we adopt the Temporal interpretation
of Modal Logic.  The variety of progrmm properties expressible within the modal
formalism is demonstrated.

@The first axiomatic system studied, the %2sometime system%1, is adequate for
proving total correctness and 'eventuality' properties.  However, it is inadequate
for proving invariance properties.  The stronger %2nexttime system%1, obtained by
adding the %2next%1 operator, is shown to be adequate for invariances as well.
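
@(As an editorial gloss in present-day temporal-logic notation, which is not
necessarily the paper's own syntax, the properties in question read:)
.begin nofill
\Box P          % invariance: P holds in every state from now on
\Diamond Q      % eventuality: Q holds in some future state
\bigcirc P      % nexttime: P holds in the immediate successor state
\varphi \rightarrow \Diamond (\mathit{halted} \wedge \psi)
                % total correctness: precondition \varphi guarantees
                % termination in a state satisfying \psi
.end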
.next page
.begin nofill
STAN-CS-79-752
@"Projected Lagrangian Algorithms for Nonlinear Minimax and %2l%1↓1 Optimization"
Author:  Michael Lockhart Overton↔(Thesis)
164 pages↔Available in microfiche only.
.end

@This thesis presents new algorithms for solving the nonlinear minimax and
%2l%1↓1 optimization problems.  The minimax problem often arises in circuit
design and game theory, while solving the %2l%1↓1 problem is useful in the
robust analysis of experimental data.  Both problems occur in wide varieties
of discrete data approximation problems in the %2l↓∞%1 and %2l%1↓1 norms.
Both the minimax and %2l%1↓1 problems are unconstrained optimization problems
whose objective functions are not differentiable everywhere, and hence cannot
be solved efficiently by standard techniques for unconstrained optimization.

@It is well known that the minimax problem can be transformed into a nonlinearly
constrained optimization problem with one extra variable, where the objective
and constraint functions are continuously differentiable.  This equivalent
problem has special properties which are ignored if solved by a general-purpose 
constrained optimization method.  The algorithm we present exploits the special
structure of the equivalent problem.  A direction of search is obtained at
each iteration of the algorithm by solving an equality-constrained quadratic
programming problem, related to one a projected Lagrangian method might use
to solve an equivalent constrained optimization problem.  Special Lagrange
multiplier estimates are used to form an approximation to the Hessian of the
Lagrangian function, which appears in the quadratic program.  Analytical
Hessians, finite-differencing or quasi-Newton updating may be used in the
approximation of this matrix.  The resulting direction of search is guaranteed
to be a descent direction for the minimax objective function.  Under mild
conditions the algorithms are locally quadratically convergent if analytical
Hessians are used.
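
@(The transformation referred to above is the standard one; in modern LaTeX
notation it reads:)
.begin nofill
\min_x \; \max_{1 \le i \le m} f_i(x)
\qquad \Longleftrightarrow \qquad
\min_{x,\,t} \; t
\quad \mbox{subject to} \quad f_i(x) \le t, \qquad i = 1, \dots, m
.end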

@The %2l%1↓1 problem can also be transformed into a nonlinearly constrained
optimization problem, but it involves many extra variables.  We show, however,
how to construct a method closely related to that for the minimax problem which
requires successively solving quadratic programs in the same number of variables
as that of the original problem.  Thus the %2l%1↓1 problem is as tractable as
the minimax problem by the projected Lagrangian technique.
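
@(The corresponding standard reformulation of the %2l%1↓1 problem, which is
consistent with the "many extra variables" remark, is:)
.begin nofill
\min_x \; \sum_{i=1}^m |f_i(x)|
\qquad \Longleftrightarrow \qquad
\min_{x,\,u,\,v} \; \sum_{i=1}^m (u_i + v_i)
\quad \mbox{subject to} \quad f_i(x) = u_i - v_i, \qquad u_i,\, v_i \ge 0
.end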

@Because the minimax and %2l%1↓1 objective functions are not necessarily
differentiable along the directions of search, it is important to use special
line search algorithms to obtain reductions in these functions at each
iteration.  Several flexible steplength algorithms are presented, along with
algorithms for univariate minimization of the minimax, %2l%1↓1 and related
objective functions.

@The usefulness of the methods is illustrated by presenting the results of
applying them to a number of problems from the literature.
.next page
.begin nofill
STAN-CS-79-753
@"Should Tables Be Sorted?"
Author:  Andrew Chi-Chih Yao
36 pages↔Cost:  $2.75
.end

@We examine optimality questions in the following information retrieval
problem:  Given a set %2S%1 of %2n%1 keys, store them so that queries of
the form "Is %2x ε S%1?" can be answered quickly.  It is shown that, in
a rather general model including all the commonly-used schemes,
%3i%2lg(n+1)%3j%1 probes to the table are needed in the worst case,
provided the key space is sufficiently large.  The effects of smaller
key space and arbitrary encoding are also explored.
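
@(The bound is exactly what binary search on a sorted table achieves; a
minimal sketch, counting probes:)
.begin nofill
# Binary search attains the ceiling-of-lg(n+1) worst-case probe bound that
# the paper shows is optimal in its general model.
def member(table, x):
    lo, hi, probes = 0, len(table) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1                  # one probe of the table
        if table[mid] == x:
            return True, probes
        if table[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, probes

print(member([2, 3, 5, 7, 11, 13, 17], 11))   # n = 7: at most lg 8 = 3 probes
.end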
.next page
.begin nofill
HPP-79-21↔(STAN-CS-79-754)
@"Natural Language Understanding"
Authors:  Anne Gardner, James Davidson, and Terry Winograd
100 pages↔Available in microfiche only.
.end

@This report is the section on Natural Language research from the
Handbook of Artificial Intelligence.  The Handbook is a compendium of
articles about AI ideas, techniques, and systems intended for non-AI
scientists, engineers, and students.  The material in this report
reviews the development of ideas for computer programs that understand
natural language, briefly sketches the main ideas of several types of
grammars and parsing strategies, and describes, among others, the
following important NL systems: LUNAR, SHRDLU, MARGIE, SAM, PAM, and
LIFER.  The Handbook will be published early in 1980.  We have
distributed this manuscript before publication with the hope of
obtaining comments and suggestions from the AI community.
.next page
.begin nofill
AIM-331↔(STAN-CS-79-755)
@"Efficiency Consideration in Program Synthesis:  A Knowledge-Based Approach"
Author:  Elaine Kant↔(Thesis)
160 pages↔Cost:  $6.20
.end

@This thesis presents a framework for using efficiency knowledge to guide
program synthesis when stepwise refinement is the primary synthesis
technique.  The practicality of the framework is demonstrated via the
implementation of a particular search-control system called LIBRA.  The
framework is also one model for how people write and analyze programs.

@LIBRA complements an interactive program-synthesis project by adding
efficiency knowledge to the basic coding knowledge and by automating the
implementation-selection process.  The program specifications include size
and frequency notes, the performance measure to be minimized, and some
limits on the time and space of the synthesis task itself.  LIBRA selects
algorithms and data representations and decides whether to use
"optimizing" transformations by applying knowledge about time and storage
costs of program components.  Cost estimates are obtained by incremental,
algebraic program analysis.

@Some of the most interesting aspects of the system are
explicit rules about
plausible implementations, constraint setting, multiple-exit-loop
analysis, the comparison of optimistic and achievable cost estimates to
identify important decisions, and resource allocation on the basis of
importance.  LIBRA has automatically guided the construction of programs
that classify, retrieve information, sort, and compute prime numbers.
.next page
.begin nofill
HPP-79-22↔(STAN-CS-79-756)
@"Applications-oriented AI Research:  Science and Mathematics"
Authors:  James S. Bennett, Bruce G. Buchanan, Paul R. Cohen, and Fritz Fisher
110 pages↔Available in microfiche only.
.end

@This report includes material from the Handbook of Artificial
Intelligence covering AI research aimed at applications in various
domains.  The Handbook is a compendium of articles about AI ideas,
techniques, and systems intended for non-AI scientists, engineers, and
students.  The material in this report covers research in applying AI
techniques in systems to be used by chemists, mathematicians, and other
scientists and includes descriptions of programs like TEIRESIAS, AM,
MACSYMA, PROSPECTOR, DENDRAL, CONGEN, Meta-DENDRAL, and CRYSALIS.  The
Handbook will be published early in 1980.  We have distributed this
manuscript before publication with the hope of obtaining comments and
suggestions from the AI community.
.next page
.begin nofill
HPP-79-23↔(STAN-CS-79-757)
@"Applications-oriented AI Research:  Medicine"
Authors:  Victor B. Ciesielski, James S. Bennett, and Paul R. Cohen
40 pages↔Available in microfiche only.
.end

@This report includes material from the Handbook of Artificial
Intelligence describing research on applying AI techniques to medical
systems.  The Handbook is a compendium of articles about AI ideas,
techniques, and systems intended for non-AI scientists, engineers, and
students.  The articles in this report give an overview of research in
AI applications in medicine and describe the following important medical
programs: MYCIN, CASNET, INTERNIST, PIP, Digitalis Advisor, IRIS, and
TEIRESIAS.  The Handbook will be published early in 1980.  We have
distributed this manuscript before publication with the hope of
obtaining comments and suggestions from the AI community.
.next page
.begin nofill
HPP-79-24↔(STAN-CS-79-758)
@"Automatic Programming"
Authors:  Robert Elschlager and Jorge Phillips
100 pages↔Available in microfiche only.
.end

@This report is the section from the Handbook of Artificial Intelligence
on Automatic Programming (AP) research.  The Handbook is a compendium of
articles about AI ideas, techniques, and systems intended for non-AI
scientists, engineers, and students.  The material in this report
surveys AP methodologies and program specification techniques and
describes in some detail the following AP systems: PSI, SAFE,
Programmer's Apprentice, PECOS, DAEDALUS, PROTOSYSTEM-I, NLPQ, and
LIBRA.  The Handbook will be published early in 1980.  We have
distributed this manuscript before publication with the hope of
obtaining comments and suggestions from the AI community.
.next page
.begin nofill
HPP-79-25↔(STAN-CS-79-759)
@"Schema-shift Strategies for Understanding Structured Texts in Natural Language"
Author:  Alain Bonnet
40 pages↔Available in microfiche only.
.end

@This report presents BAOBAB-2, a computer program built upon MYCIN
(ref Shortliffe74) that is used for understanding medical summaries
describing the status of patients.  Due both to the conventional way
physicians present medical problems in these summaries and to the
constrained nature of medical jargon, these texts have a very strong
structure.  BAOBAB-2 takes advantage of this structure by using a model
of this organization as a set of related schemas that facilitate the
interpretation of these texts.  The structures of the schemas and their
relation to the surface structure are described.  Issues relating to the
selection and use of these schemas by the program during interpretation
of the summaries are discussed.
.next page
. <<SOL reports>>
.once center
SYSTEMS OPTIMIZATION LABORATORY
.skip
@The following reports are available from the Systems Optimization Laboratory.
SOL Reports can be obtained by writing to:  Gail L. Stein, Systems Optimization
Laboratory, Operations Research Department, Stanford University, Stanford,
California 94305.
.skip 2
.begin nofill
Technical Report SOL 79-5
@"Sparse Least Squares by Conjugate Gradients: A Comparison of Preconditioning 
@Methods"
Author:  Michael A. Saunders
.end

@The conjugate-gradient method is ideal for sparse least squares because of its
simplicity and low storage requirements, and because it can be relied upon to
converge to a satisfactory solution.  However, the number of iterations required
can be unacceptably high if the design matrix X is ill-conditioned.

@In this context, preconditioning means a change of variables aimed at
accelerating convergence.  For some nonsingular C, the usual linear model
%2y = %1X%5b%2 + e%1 is transformed into %2y = %1X'%5b%2' + e%1,
where X' = XC∩[-1].  (The product is not computed explicitly.)
If X' is better conditioned than X, the transformed problem should
require fewer iterations.
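
@(A small numerical sketch of this change of variables, with an artificial,
deliberately ideal choice of C made only so the effect is visible; the
report itself studies cheap sparse choices such as SSOR and LU.)
.begin nofill
import numpy as np

def cgls(matvec, rmatvec, y, n, iters):
    """Conjugate gradients for least squares on an operator given as
    matvec (z -> X'z) and rmatvec (r -> X'^T r)."""
    z = np.zeros(n)
    r = y - matvec(z)
    s = rmatvec(r)
    p, gamma = s.copy(), s @ s
    for _ in range(iters):
        q = matvec(p)
        alpha = gamma / (q @ q)
        z += alpha * p
        r -= alpha * q
        s = rmatvec(r)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return z

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
y = rng.standard_normal(20)
Q, R = np.linalg.qr(X)                # C = R: the "perfect" preconditioner,
                                      # making X' = XR^-1 orthonormal in its
                                      # columns, so one iteration is exact
matvec = lambda z: X @ np.linalg.solve(R, z)         # X'z = XC^-1 z
rmatvec = lambda r: np.linalg.solve(R.T, X.T @ r)    # X'^T r
z = cgls(matvec, rmatvec, y, 5, iters=1)
b = np.linalg.solve(R, z)             # recover the original variables
print(np.allclose(X.T @ (y - X @ b), 0, atol=1e-8))  # normal equations hold
.end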

@Here we give some computational results for two particular choices of C.
The SSOR approach of Bj%6:%1orck and Elfving promises to give useful savings
generally.  When X is very ill-conditioned, it appears that an LU factorization
of X (with C = U) can lead to even faster convergence.