perm filename LISP.MTG[TIM,LSP] blob sn#577516 filedate 1981-04-07 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00023 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00004 00002	∂12-Mar-81  1418	ENGELMORE at USC-ISI 	Lisp Meeting Announcement   
C00015 00003	∂25-Mar-81  1423	ENGELMORE at USC-ISI 	Status reports for lisp meeting  
C00019 00004	∂29-Mar-81  2016	ENGELMORE at USC-ISI 	Status report -- FranzLisp  
C00027 00005	∂30-Mar-81  1439	HEDRICK at RUTGERS 	status report  
C00041 00006	∂30-Mar-81  1439	Dick Gabriel <RPG at SU-AI>   
C00054 00007	∂30-Mar-81  1439	Dick Gabriel <RPG at SU-AI>   
C00061 00008	∂31-Mar-81  0919	Moon at MIT-MC (David A. Moon) 	Status report for Lisp Machine Lisp   
C00069 00009	∂01-Apr-81  1125	Scott.Fahlman at CMU-10A 	Spice Lisp Status  
C00109 00010	∂01-Apr-81  1346	ENGELMORE at USC-ISI 	Agenda for Lisp Meeting
C00116 00011	∂01-Apr-81  2017	ENGELMORE at USC-ISI 	Lisp meeting: bring cash    
C00119 00012	∂02-Apr-81  0749	ENGELMORE at USC-ISI 	Lisp meeting reports   
C00207 00013	∂02-Apr-81  1617	CLR at MIT-XX 	Status of MDL Project    
C00217 00014	∂02-Apr-81  1618	Feldman@SUMEX-AIM 	Lisp PLITS status report  
C00225 00015	∂02-Apr-81  1617	BALZER at USC-ISIB 	INTERLISP-VAX STATUS REPORT   
C00230 00016	∂03-Apr-81  0554	SHEIL at PARC-MAXC 	Status report on Interlisp-D  
C00245 00017	∂03-Apr-81  1205	Griss at UTAH-20 (Martin.Griss) 	Standard LISP Report  
C00281 00018	∂03-Apr-81  1205	CSVAX.fateman at Berkeley 	Comments on your original call   
C00309 00019	∂04-Apr-81  2212	JONL at MIT-MC (Jon L White)  
C00347 00020	∂06-Apr-81  1220	YONKE at BBND 	Interlisp-Jericho Status Report    
C00356 00021	∂02-Apr-81  1744	BALZER at USC-ISIB 	INFORMAL INTERLISP MEETING    
C00361 00022	∂06-Apr-81  2304	Barstow@SUMEX-AIM 	Future LISP Environments  
C00366 00023	∂07-Apr-81  0026	ENGELMORE at USC-ISI 	Status reports, etc.   
C00424 ENDMK
C⊗;
∂12-Mar-81  1418	ENGELMORE at USC-ISI 	Lisp Meeting Announcement   
Date: 12 Mar 1981 1359-PST
Sender: ENGELMORE at USC-ISI
Subject: Lisp Meeting Announcement
From: ENGELMORE at USC-ISI
To: Kahn, Adams, Cerf, Druffel, 
To: RBrachman at BBND, Yonke at BBN, 
To: Fahlman at CMU-10B, 
To: Balzer at ISIB, Crocker at ISIF, 
To: JonL at MIT-MC, Moon at MIT-MC, RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
To: Hearn at RAND-UNIX, Sowizrel at RAND-UNIX, 
To: Hedrick at RUTGERS, RPG at SU-AI
To: Hendrix at SRI-KL, Shostak at SRI-KL, 
To: RWW at SU-AI, Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
To: Fateman at BERKELEY, 
To: Griss at UTAH-20, 
To: Deutsch at PARC, Masinter at PARC
Message-ID: <[USC-ISI]12-Mar-81 13:59:50.ENGELMORE>

       Call for a Discussion of Lisp Options

       IPTO recognizes both the critical need of our research community
       for modern computer resources and a responsibility to provide the
       resources necessary to maintain a high quality of research.  This
       message focusses on the AI community, most of which uses Lisp as
       its primary programming language.  Our current effort to meet the
       need for more computing power (both CPU cycles and address space)
       is confounded by the current multitude of options facing us in both
       hardware and software.  Our budget, of course, is finite, and
       necessitates our choosing the best possible investment strategy.
       In order to formulate that strategy and a management plan to
       implement it, we need to discuss the options with you.

       My primary concern here is not hardware, but software.  The
       long-term hardware issues will be dealt with once the software
       question is resolved, but some discussion of hardware is relevant
       (see below).  There are now several respectable Lisp dialects in
       use, and others under development.  The efficiency,
       transportability and programming environment vary significantly
       from one to another.  Although this pluralism will probably
       continue indefinitely, perhaps we can identify a single "community
       standard" that can be maintained, documented and distributed in a
       professional way, as was done with Interlisp for many years.

       Here are some of the issues that need to be sorted out:

          - Language Development:  There is now a very large set of
            Lisp dialects and subdialects -- Interlisp, MacLisp,
            CADR-Lisp, Spice-Lisp, Franz-Lisp, NIL, UCI-Lisp,
            "Standard Lisp", MDL, etc.  What are their relative
            merits and significant differences?  Is there an
            opportunity to combine any of them as variants of a
            common base, supported by a single implementation?  How
            much compatibility is needed between dialects?

          - Programming environments: There are two main Lisp
            programming environments:  Interlisp and Maclisp. These
            environments comprise a set of useful functions, a set of
            conventions and a philosophy of programming.  How
            independent are these features from their respective
            language dialects?  Can both environments be supported
            within a single system?  Where does the future lie with
            respect to networking, or to utilizing the capabilities
            of displays?

          - Portability: Should we be investing more vigorously in
            the development of a highly portable programming language
            (and environment) so we can be less concerned about
            hardware choices?  What work needs to be done to minimize
            the effort of transporting Lisp to the many
            microprogrammable personal machines that are appearing
            (or will soon appear) on the market?

          - Other issues: Although this meeting is about software,
            there are some machine-specific concerns that we can't
            ignore.  For example, the Vaxen are and will probably
            continue to be a very widely used line of machines.
            What's the future of Lisp for these machines?  More
            specifically, what are the pros and cons of Franz Lisp as
            a near term solution to running Lisp programs on Vaxen?
            If the Vax Interlisp and/or NIL efforts fail to produce a
            useful product, how big an effort would it be for their
            users to translate their programs to Franz Lisp?  How
            essential is the use of microcode on the Vax for
            efficient Lisp execution?  How should Lisp systems
            change for use on a single-user Vax?  What about
            exploiting the large address space under TOPS-20 as a
            near-term alternative for Interlisp or other Lisp
            dialects?

       I would like to propose a panel of users, implementers and IPTO
       program managers to address these issues with the objective of
       developing a plan for future Lisp development, maintenance and
       support.  The two main items on the agenda are 1) examining the
       alternatives, and 2) formulating a plan of attack.

       This meeting needs to be held soon, preferably within the next 4 to
       6 weeks.  Unless there are strong objections, I would like to
       schedule the meeting on Wednesday, April 8, at SRI (Gary Hendrix
       has kindly agreed to be the host).  I think we can complete the
       discussion in one day, but it may require an evening session as
       well.

       Please let me know as soon as possible if you can be there.  I
       think this will be a meeting that's well worth attending, and I
       hope all the recipients of this message can participate.  If you
       can't make it, feel free to suggest an alternate attendee.

       Distribution list:
       DARPA: Kahn,Adams,Cerf,Druffel
       BBN: Brachman,Yonke
       CMU: Fahlman
       ISI: Balzer,Crocker
       MIT: JonL White,Greenblatt,Moon,Reeve,Vezza
       Rand: Hearn,Sowizrel
       Rutgers: Hedrick
       SRI: Hendrix,Shostak
       Stanford: Weyhrauch,Genesereth,VanMelle,Gabriel
       UCB: Fateman
       Utah: Griss
       Xerox PARC: Deutsch,Masinter
       Yale: Riesbeck,McDermott
       Bell: Ginsparg
∂25-Mar-81  1423	ENGELMORE at USC-ISI 	Status reports for lisp meeting  
Date: 25 Mar 1981 1420-PST
Sender: ENGELMORE at USC-ISI
Subject: Status reports for lisp meeting
From: ENGELMORE at USC-ISI
To: Kahn, Adams, Cerf, Druffel, 
To: Yonke at BBN, Zdybel at BBN, 
To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
To: Balzer at ISIB, Crocker at ISIF, 
To: JONL at MIT-MC, Moon at MIT-MC, 
To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
To: Hearn at RAND-UNIX, Sowizrel at RAND-UNIX, 
To: Hedrick at RUTGERS, 
To: Green at SCI-ICS, 
To: Hendrix at SRI-KL, Shostak at SRI-KL, 
To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
To: Feigenbaum at SU-SCORE, 
To: RWW at SU-AI, RPG at SU-AI, 
To: Fateman at BERKELEY, 
To: Griss at UTAH-20, 
To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
To: Feldman at SUMEX-AIM, Lee.Moore at CMU-10A, 
To: Engelman at USC-ECL
Message-ID: <[USC-ISI]25-Mar-81 14:20:36.ENGELMORE>





I would like those of you who are currently engaged in some form of LISP
implementation effort to prepare, in advance of the April 8th meeting, a
short status report.  The report should be completed before Friday,
April 3rd, and sent to me.  I'll distribute the reports to the
attendees.  We can then save time at the meeting itself by limiting the
discussion to comments or clarifications on each of your efforts, and
avoid the show-and-tell parade.

Please include the following in your status report:

   1. Describe your project.

   2. What are the distinguishing features of your language and/or
      programming environment?

   3. Is your system operational?  If yes, on what hardware?  If no,
      when do you expect to be operational, and on what?

   4. What are your present plans for further development?  Include
      estimated milestone dates, if possible.

The following list of current projects is not exhaustive, but should be
a proper subset of the reports I'd like to have available:

   1. CADR-Lisp, including environment (Greenblatt and Moon)
   2. Spice-Lisp (Steele)
   3. Franz-Lisp (Fateman)
   4. Standard Lisp (Griss)
   5. UCI-Lisp on extended TOPS-20 (Hedrick)
   6. VAX Interlisp (Balzer)
   7. NIL (White)
   8. MDL (Reeve)

I appreciate your cooperation, and look forward to your participation at
the meeting.


Bob

∂29-Mar-81  2016	ENGELMORE at USC-ISI 	Status report -- FranzLisp  
Date: 29 Mar 1981 1958-PST
Sender: ENGELMORE at USC-ISI
Subject: Status report -- FranzLisp
Subject: [CSVAX.fateman at Berkeley: Franz info]
From: ENGELMORE at USC-ISI
To: Kahn, Adams, 
To: Yonke at BBN, Zdybel at BBN, 
To: Wilson at CCA, 
To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
To: Balzer at ISIB, Crocker at ISIF, 
To: JONL at MIT-MC, Moon at MIT-MC, 
To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
To: Hearn at RAND-UNIX, Sowizrel at RAND-UNIX, 
To: Hedrick at RUTGERS, 
To: Green at SCI-ICS, 
To: Hendrix at SRI-KL, Shostak at SRI-KL, 
To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
To: Feigenbaum at SU-SCORE, 
To: RWW at SU-AI, RPG at SU-AI, 
To: Griss at UTAH-20, 
To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
To: Lee.Moore at CMU-10A, 
To: Engelman at USC-ECL
Message-ID: <[USC-ISI]29-Mar-81 19:58:12.ENGELMORE>

As status reports come in between now and the day of the Lisp meeting I will
distribute them to you via netmail.  Here's Richard Fateman's report on
Franz Lisp.  I hope it will raise some questions, but more importantly I
hope it will inspire others to send in their contributions.    -rse

	
Begin forwarded message
Mail-From: ARPANET host BERKELEY rcvd at 27-Mar-81 1416-PST
Date: 27 Mar 1981 13:47:26-PST
From: CSVAX.fateman at Berkeley
To: engelmore@usc-isi
Cc: CSVAX.fateman@Berkeley
Subject: Franz info



   1. Describe your project.

Franz Lisp is a Maclisp-like lisp system that was written at UC Berkeley
primarily to support the Macsyma algebraic manipulation system on large-address
space machines, and specifically the VAX in the UNIX environment.

   2. What  are the distinguishing features of your language and/or
      programming environment?

Franz is written in C, with the exception of a few pages of assembler
for arbitrary-precision integers (bignums).

Franz has 32-bit pointers (on the VAX), arrays, hunks, bignums, and
double-precision floats (no single precision).  A "bibop" allocation scheme
is used, with statically allocated type tables, and garbage collection is
conventional mark-and-sweep.

Franz has a clean interface to Fortran 77, C, Pascal, and other language
systems which conform to the usual UNIX call conventions.
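
As a sketch of how such a foreign call is set up: the Franz manual of this
period documents a cfasl-style loader for object files produced by the other
compilers.  The argument conventions shown below (object file, C entry
point, Lisp name, result discipline) are an assumption and should be checked
against the manual.

; load fact.o (compiled from C) and make its entry point _fact callable
; from Lisp as c-fact, returning an integer (assumed calling form)
(cfasl "fact.o" "_fact" 'c-fact "integer-function")
(c-fact 10.)            ; thereafter it is called like any Lisp function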

UNIX program profiling tools, etc. are operational with Franz.

No spaghetti stack; only primitive funarg handling.

On a VAX 11/780, Franz runs about 3-6 times slower than a KL-10
(MIT-MC's).  Although figures are hard to compare, it appears that the
11/780 is substantially faster than a CADR running Macsyma.

Run time environment is close to maclisp (thus, less elaborate than
Interlisp).

There is a compiler, "Liszt", which (under flag control) understands many
features of Maclisp, UCI Lisp, and Interlisp.  Thus the compiler provides a
mapping from other lisps to Franz, so that files written in different host
dialects can be mixed, subject to name-conflict problems.

Franz should be easily transportable to other systems with sufficiently
powerful system facilities, and a C compiler.  Each such transport would
generally require a rewriting of Liszt's code generator.


   3. Is your system operational? If yes, on what hardware?  If no,
      when do you expect to be operational, and on what?

Franz has been running under the VAX/UNIX environment since October, 1978.
It was moved to VAX/VMS in about 3 weeks (April, 1980).
It has been distributed to about 80 VAX/UNIX systems and some unknown
number (>5) VMS systems. It has been running at some non-Berkeley sites
since January, 1979.
It runs unchanged on non-780 VAX systems.

We are not restricting distribution of the source and intend 
to provide distribution of enhancements given to us without restriction.


   4. What are your present plans for further development?  Include
      estimated milestone dates, if possible.

There is no further major development of Franz necessary for our immediate
goals in building an integrated scientific environment, although various
activities concerned with tuning, fixing bugs, etc, are still being
funded at a low level by the Dept. of Energy Applied Math Sciences program.
No major changes have been made to the VAX code in at least a year.

Because of interest in personal computers, we may be transporting Franz 
to MC68000 UNIX;
we may also set up a system for IBM style computers.

          --------------------
End forwarded message
		

∂30-Mar-81  1439	HEDRICK at RUTGERS 	status report  
Date: 30 Mar 1981 1248-EST
From: HEDRICK at RUTGERS
Subject: status report
To: ENGELMORE at USC-ISI
cc: amarel at RUTGERS, josh at RUTGERS
In-Reply-To: Your message of 12-Mar-81 1659-EST
Redistributed-To: Kahn at ISI, Adams at ISI, 
Redistributed-To: Yonke at BBN, Zdybel at BBN, 
Redistributed-To: Wilson at CCA, 
Redistributed-To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
Redistributed-To: Balzer at ISIB, Crocker at ISIF, 
Redistributed-To: JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-UNIX, Sowizrel at RAND-UNIX, 
Redistributed-To: Hedrick at RUTGERS, 
Redistributed-To: Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, 
Redistributed-To: RWW at SU-AI, RPG at SU-AI, 
Redistributed-To: Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, 
Redistributed-To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, 
Redistributed-To: Engelman at USC-ECL
Redistributed-By: ENGELMORE at USC-ISI
Redistributed-Date: 30 Mar 1981

1. Describe your project.

Elisp is a complete rewrite of the assembly-language portion of Rutgers/UCI
Lisp, to allow it to run in extended (30-bit) addressing mode under Tops-20.
The design goal is to be as compatible as possible with the original R/UCI
Lisp, but to allow changes in low-level representations where these seem to
make sense.  The technology is radically different from that underlying
R/UCI Lisp:  type-coded pointers and a copying GC. I began the project
at the beginning of February, 1981, and expect to put about 3 man-months
into it.  (One of the reasons that R/UCI Lisp is a good choice is that it
should be possible to transport it within this amount of time.)  I am trying
to do as much of the work in Lisp as possible.  The rest is in assembly
language, but with heavy use of macros that should be somewhat
machine-independent.  I have not gone as far as Standard Lisp in minimizing
the assembly language portion, since I do not have their incentive to
transport with minimum cost.  I conjecture that the final version could
be transported to a suitable machine (if there is another one) in a man-month.
[The reason that I have some doubts about whether there is another one is
that the use of type-coded pointers may depend upon the fact that indexed
references on the 20 ignore the high-order 6 bits, which I use for the type
code.]
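
A purely hypothetical sketch of the pointer layout just described (a 36-bit
word holding a 30-bit extended address, with the type code in the high-order
6 bits); these are not the actual Elisp primitives:

(DEFUN POINTER-TYPE (P)                 ;the high-order 6 bits: type code
       (LSH P -30.))

(DEFUN POINTER-ADDRESS (P)              ;the low-order 30 bits: the address
       (BOOLE 1 P (1- (LSH 1 30.))))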

I should point out that Lisp code and data will approximately double in
size when moved from R/UCI Lisp to Elisp.  This is because two words must
be used for a CONS cell.

2. What  are the distinguishing features of your language and/or
programming environment?

These should be the same as R/UCI Lisp.  For those who are not familiar
with it, R/UCI Lisp is a descendant of Stanford's Lisp 1.6, which appears
to have been derived from some early Lisp written at MIT.  (Both the 
interpreter and compiler appear to have pieces of code that are identical
to MAClisp.)  It is a fairly classical shallow-binding Lisp.  It has
a reasonable collection of Lisp functions, but few of the sophisticated
packages in Interlisp.  In particular, we are philosophically opposed to
compiler optimizations that require declarations or introduce 
incompatibilities between the interpreter and compiler.  [We have inherited
the SPECIAL variable problem, which we may solve by making SPECIAL the
default and having LOCAL declarations.]  UCI Lisp differs from 1.6 largely
in that some of the folks who had worked on Interlisp transported the Lisp
editor and break package from Interlisp to it, and added a few other
facilities.  R/UCI Lisp contains a few additional functions, none of which,
in my opinion, make much difference.  I think most of our users now believe
that a structure editor is a mistake.  We will continue to support the
UCI/Interlisp editor, but will add an interface to EMACS.  I expect that
most of our users will use the latter.  We will probably add vectors,
i.e. contiguous words of garbage-collected memory.

3. Is your system operational? If yes, on what hardware?  If no,
when do you expect to be operational, and on what?

I now have a system that is roughly equivalent to Lisp 1.6.  I estimate
that this required somewhere around 1 man-month.  (That is, I spent
half-time on it for 2 months.  Since I work 16 hours a day, it may be that
this should be counted as 2 man-months.)  One of my staff is working on
the compiler.  This will be an adapted version of the Standard Lisp
compiler.  I hope to have it running by the conference.  I estimate
that bringing it up to the equivalent of UCI Lisp will take another
1 to 2 man-months.  I hope to have a system equivalent to UCI Lisp, with
compiler, by the end of this summer.

This system requires Tops-20 running on a KL-10 model B (or Jupiter).
The requirement of a model B means that some early Tops-20 systems will
not be able to run it.  It appears from the documentation that the KS-10
(2020) will not support extended addressing.  Casual questions to Foonly
indicated that they think they can implement the required capability if
requested to. (However this would require serious development work on
Tenex in order to be useable.)  The operating system must be Tops-20
version 4 or later.  Version 4 requires a one-word patch to enable
extended addressing.  There are two monitor bugs in version 4 that will
have to be corrected before Elisp can go into widespread use.  DEC is
supposedly working on them.  We believe that version 5 will support it
without change, but version 5 is some time off (it will not come out
until the Jupiter is released).   In principle Tops-10 could be made to
support extended addressing, but there is no evidence that this will be
done.  It would require a complete redesign of the paging structure and
a serious coding effort distributed throughout the monitor.

Version 4 of the operating system supports only 23 bits of address.
I believe this is enough for practical purposes, and that you are likely
to meet serious performance limits (due to increasing page rates)
before getting to that point.  (For scale, 23 bits of word addressing is
2^23 = 8,388,608 words, or roughly 4 million CONS cells at two words
each.)  We are considering limiting addressing to 29 bits to speed up
type-checking in CAR and CDR.  Please note that addresses here refer to
words, and there are two words per CONS cell.

4. What are your present plans for further development?  Include
estimated milestone dates, if possible.

Once the system has been brought to the level of R/UCI Lisp, I have no
particular plans for development.  We have a full-time staff member
assigned to Lisp support work, and he will no doubt do things from
time to time, but probably nothing dramatic.  It might be amusing to
try to transport it to the VAX or some suitable micro, but this will
be driven by our users' needs.


Incidentally, the interpreter appears to run slightly faster than the
R/UCI interpreter.  Several space-time tradeoffs have been made in favor
of time, so this is not a surprise.  There are some performance problems
with extended addressing, due to pager refills caused by the limitations
of the paging structure.  The first draft of Elisp ran a factor of 5 to
10 slower than R/UCI Lisp due to these.  Careful placement of data got
it down so that Elisp ran about 1.5 to 2 times slower (and a factor of 6
faster than Franz Lisp on a test using doubly-recursive Fibonacci).  My
guess is that 1.5 to 2 is the actual measure of the slowdown due to the
pager problems.  The fact that it is now running a bit faster than R/UCI
Lisp probably means that if these problems were solved it would run
almost a factor of 2 faster.  This is for the interpreter.  It is likely
that compiled code will show the full 1.5 to 2 slowdown, although even
there we may be able to match R/UCI Lisp.  We will be using direct
function calling, instead of the LUUO mechanism that routes all calls
through the interpreter. We believe that DEC will be working with us on
the performance problems, but this is not yet final.  I believe that
with a little attention to the microcode, extended code can be made 1.1
to 1.2 times slower than normal code.
-------

∂30-Mar-81  1439	Dick Gabriel <RPG at SU-AI>   
Date: 30 Mar 1981 1214-PST
From: Dick Gabriel <RPG at SU-AI>
To:   engelmore at USC-ISI  
Redistributed-To: Kahn at ISI, Adams at ISI, 
Redistributed-To: Yonke at BBN, Zdybel at BBN, 
Redistributed-To: Wilson at CCA, 
Redistributed-To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
Redistributed-To: Balzer at ISIB, Crocker at ISIF, 
Redistributed-To: JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-UNIX, Sowizrel at RAND-UNIX, 
Redistributed-To: Hedrick at RUTGERS, 
Redistributed-To: Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, 
Redistributed-To: RWW at SU-AI, RPG at SU-AI, 
Redistributed-To: Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, 
Redistributed-To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, 
Redistributed-To: Engelman at USC-ECL
Redistributed-By: ENGELMORE at USC-ISI
Redistributed-Date: 30 Mar 1981

			      S-1 NIL Project

			    Richard P. Gabriel
			    Guy L. Steele, Jr.
			     Rodney A. Brooks

Overview

The goal of the S-1 NIL (New Implementation of LISP) project is to produce
a working LISP system for the S-1 which provides a programming system and
environment as good or better than that provided by MacLISP.  The NIL
language is primarily a superset of MacLISP.  A few changes were made to
the language in the light of over ten years' experience with the MacLISP
system on five different operating systems; the most notable of these is
the institutionalization of the SPECIAL/local variable distinction into
the language (as opposed to the previous practice of letting them be
compiler declarations, which was a source of bugs because interpreted and
compiled code might behave differently).  Others include the introduction
of closures (as in SCHEME); vector, character string, and bit string data
types; a more powerful procedure-calling interface; and a more general I/O
system. {Refer to the VAX NIL report for details on the language.}
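
A minimal illustration of the special/local distinction described above;
the DEFVAR surface syntax follows Lisp Machine practice and is only assumed
here for NIL:

(DEFVAR *BASE* 10.)             ;*BASE* is special: dynamically bound,
                                ;in interpreted and compiled code alike

(DEFUN SCALE (X)                ;X is local to SCALE, treated the same
       (* X *BASE*))            ;way by the interpreter and the compiler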

An important goal of the NIL project is to avoid machine-dependent or
operating-system-dependent special features, to make it easier to move NIL
programs from one machine to another.  A parallel implementation of NIL is
being undertaken for the VAX at MIT.  Most of NIL itself is to be written
in NIL, so that the S-1, VAX, and other implementations can share most of
the code involved. 

Subject to this goal, NIL is also intended to be as compatible as possible
with MacLISP, except for a few carefully weighed decisions to be different
as a result of MacLISP experience.  Subordinate to that, in turn, is the
goal of being as compatible as possible with the LISP Machine language
developed by the LISP Machine Group at MIT.

The decision was made to implement NIL only for the S-1 Mark II machine.
The Mark I's architecture was sufficiently different, especially with
regard to pointer formats, that it would have been extremely difficult to
accommodate both architectures with a single compiler and assembler.

S-1 Implementation

The implementation strategy was planned at the beginning of the summer of
1979 as follows.  A compiler, assembler, and cold loader would be written
in the intersection of NIL and MacLISP.  These could therefore be run on
both the PDP-10 under MacLISP at first, and later run on the S-1 under
NIL.  The compiler would take NIL code and produce symbolic assembly
language output (in a LISP S-expression format); the assembler would take
this output and produce a binary file; and the cold loader would take the
binary files, link them into a minimal initial run-time environment, and
produce LDI files loadable by the standard S-1 binary loader (the same one
used to load PASCAL programs).

Once this much was done, then the LISP run-time loader (FASLOAD) could be
written in NIL, and then compiled, assembled, and cold loaded on the
PDP-10.  The resulting LDI file could then be loaded into the S-1
simulator, or the S-1 Mark II when it is built.  With the run-time
loader running on the S-1 then any LISP binary file could be loaded.  At
that point any NIL program could be compiled and assembled on the PDP-10
and loaded into the S-1.  The next step would be to so compile and
assemble the compiler and assembler, thereby bootstrapping them into the
S-1.  Then the NIL interpreter, full I/O system, and whatever else could
be compiled and assembled on either machine and loaded into the S-1.

In this way almost no S-1 machine code would have to be written by hand.
Most of the run-time system would in fact be written in NIL.  For example,
the function "+" would be written "in terms of itself" as follows:

(DEFUN + (X Y) (+ X Y))

The reason this works is that when the compiler compiles this
definition, it recognizes "+" as a primitive operator that it can
open-code, and so the result is a standard LISP procedure which does not
call itself but which simply does the addition.  (The definition is needed
at all only for the benefit of the interpreter, which does not know about
open-coded primitives, but must use standard procedure objects.)
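
In the same spirit, ordinary run-time routines are written in NIL itself
and rely on the compiler's open coding of the primitives they use.  The
definition below illustrates the style only; it is not taken from the
actual sources.

(DEFUN LENGTH (L)                       ;count the elements of a list
       (DO ((TAIL L (CDR TAIL))         ;CDR down the list
            (N 0 (+ N 1)))              ;+ is open-coded by the compiler
           ((NULL TAIL) N)))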

With the core of the NIL system bootstrapped, then further versions of the
software could be compiled on the S-1 itself, and additional system
software written, such as an editor, debugging system, and so on.

Project Status

At this point the compiler, assembler, and cold loader have been written
and individually tested on a few cases.  The output of the assembler has
been cold loaded and executed on the S-1 simulator and demonstrated to run
correctly.

Summer 1981

The plan of action at this point is to gather together the original
implementors and to spend the entire summer of 1981 getting the S-1 NIL
operational. At this time, since there has been considerable progress made
on the VAX NIL, an extensive amount of code can be borrowed from MIT.  We
hope to enlist several graduate students in order to complete this project
by October 1981. Completion of this project should impact favorably on the
VAX NIL implementation.

S-1 Overview

A uniprocessor S-1 Mark II is roughly Cray-1 equivalent in
performance, with a peak scalar rate of 20 MIPS, a peak vector
rate of 80 MFLOPS, and a peak signal-processing rate of 400 MFLOPS.
Even though the Mark IIA is highly optimized for ``number crunching'',
it is very much a general purpose machine. The S-1 architecture
has a very large virtual address space (2 billion bytes), which is
both segmented and paged. In addition, every address has 5 tag bits,
which S-1 NIL can use for data tagging.

A typical Mark IIA system will have a large amount of physical memory.
Initial systems will have at least thirty-two million bytes.
The S-1 operating system, Amber, very effectively supports the use
of such large memories. Amber is a full-functionality operating
system based (loosely) on the ideas in Multics.

A Mark IIA is capable of executing a (fairly complex) instruction
every 50 nsec. To achieve this it uses a pipeline - which means
that sequences which involve many dependencies will not run
at the full rate. For instance, a CX...XR sequence (a chain of CARs
and CDRs) will run at 200 nsec per step if the pipeline is not taken
into account.  However, a Mark IIA can do up to four such sequences
interleaved at the same speed as one. Thus a good Lisp compiler might
approach the 20 MIPS peak scalar rate, though 10 MIPS is the more
likely expectation.

The Mark IIA processor cabinet will fit through a standard door
and has a mass production cost of under $500K.


∂30-Mar-81  1439	Dick Gabriel <RPG at SU-AI>   
Date: 30 Mar 1981 1214-PST
From: Dick Gabriel <RPG at SU-AI>
To:   engelmore at USC-ISI  
Redistributed-To: Kahn at ISI, Adams at ISI, 
Redistributed-To: Yonke at BBN, Zdybel at BBN, 
Redistributed-To: Wilson at CCA, 
Redistributed-To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
Redistributed-To: Balzer at ISIB, Crocker at ISIF, 
Redistributed-To: JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-UNIX, Sowizrel at RAND-UNIX, 
Redistributed-To: Hedrick at RUTGERS, 
Redistributed-To: Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, 
Redistributed-To: RWW at SU-AI, RPG at SU-AI, 
Redistributed-To: Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, 
Redistributed-To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, 
Redistributed-To: Engelman at USC-ECL
Redistributed-By: ENGELMORE at USC-ISI
Redistributed-Date: 30 Mar 1981

		       LISP Timing Progress Report
			       R.P. Gabriel

The LISP Timing Evaluation project began at the end of February and
is slowly taking shape. Though originally conceived simply as a timing
project, other evaluations are now being considered as well. Since a
large number of systems and people are involved in the project, there
are no results yet; it is still in the organizational stage.

The idea of the project is to provide both objective and subjective
bases for rational choice among the various LISPs and LISP systems
available today or to be available in the near future. The objective
measures are to be provided through the use of benchmarks which will
be run on the various systems in the test with measurements made
in terms of CPU time. These benchmarks will be/are being provided
by people at the various sites in order to provide a range of
interesting benchmarks, not merely a few artificial ones. 

The subjective measures are descriptions of the systems provided
by the volunteers, along with their experiences translating the
various benchmarks: since the benchmarks are not restricted in any
way, a translation phase at each site is required. The tools and
problems associated with each translation (with the original and
translated programs as evidence) can be interpreted as a measure
of the expressive power of a given LISP/LISP system.

A final measure of non-language efficiency, namely garbage collection
and system paging overhead time, will be attempted, though there will
be some technical problems here.

The following is a list of the systems to be tested as known at this point:

Interlisp on MAX, Dolphin, Dorado
MacLisp on SAIL
NIL on S-1
InterLisp on SUMEX
UCILISP on Rutgers
SpiceLisp on PERQ
Lisp Machine (Symbolics, CADR)
Maclisp on AI, MC, NIL on VAX
InterLisp on F2
Standard Lisp on TOPS-10, B-1700 
LISP370
TLC-lisp on Z-80
muLisp on Z-80
Muddle on DMS
Rutgers Lisp
Multics MacLisp
Jericho InterLisp
Cromemco Lisp on Z80
Franz Lisp on VAX UNIX
UTILISP

At this point only about 5 benchmarks have been proposed, and I fear that
I will need to propose a number of them, though I hope to get the volunteers
to code them up for me. At present I hope to have the following types of
benchmarks (a small sample appears after the list):

	Array reference and storage (random access)
	Array reference and storage (matrix access)
	Array reference and storage (matrix inversion)
	Short list structure (records, hunks...)
	Long list structure (cdr access)
	CAR heavy structures
	CDR heavy structures
	Interpreted function calls
	Compiled function calls
	Smashed function calls
	Table function calls (FUNCALL, SUBRCALL)
	Tail recursion (?)
	Block compiling
	Reader speed
	Property list structures
	Atom structures (saturated obarrays)
	Internal loops
	Trigonometric functions
	Arithmetic (floating and fixed)
	Special variable lookup
	Local variable lookup
	CONS time
	GC time
	Compiled code load time
	EQ test time
	Arithmetic predicates
	Type determination
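
As a small sample of the kind of benchmark involved, here is the
doubly-recursive Fibonacci mentioned in the Rutgers report, coded in
Maclisp-style Lisp; this particular coding is illustrative and is not one
of the proposed benchmarks.

(DEFUN FIB (N)                          ;exercises function calling and
       (COND ((LESSP N 2.) N)           ;small-integer arithmetic
             (T (PLUS (FIB (DIFFERENCE N 1.))
                      (FIB (DIFFERENCE N 2.))))))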

The time scale, given the current voluntary nature and the broad extent
of the project is approximately 1 year (March 1982) for the final report
with most of the benchmarks timed by September 1981.


∂31-Mar-81  0919	Moon at MIT-MC (David A. Moon) 	Status report for Lisp Machine Lisp   
Date: 30 MAR 1981 2044-EST
From: Moon at MIT-MC (David A. Moon)
Subject: Status report for Lisp Machine Lisp
To: ENGELMORE at USC-ISI
CC: rg at MIT-AI
Redistributed-To: Kahn at ISI, Adams at ISI, 
Redistributed-To: Yonke at BBN, Zdybel at BBN, 
Redistributed-To: Wilson at CCA, 
Redistributed-To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
Redistributed-To: Balzer at ISIB, Crocker at ISIF, 
Redistributed-To: JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-UNIX, Sowizrel at RAND-UNIX, 
Redistributed-To: Hedrick at RUTGERS, 
Redistributed-To: Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, 
Redistributed-To: RWW at SU-AI, RPG at SU-AI, 
Redistributed-To: Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, 
Redistributed-To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, 
Redistributed-To: Engelman at USC-ECL
Redistributed-By: ENGELMORE at USC-ISI
Redistributed-Date: 31 Mar 1981

       1. Describe your project.

Lisp Machine Lisp has these goals:

To be the best Lisp in existence, absorbing the good ideas of all
other Lisps and adding many of its own.

To support highly-interactive personal computation, using a dedicated
computer for each active user.  This includes taking effective advantage
of raster graphics.

To be upward-compatible from Maclisp.  At the same time, to fix the serious
experienced problems with Maclisp and other Lisps of the previous
generation: small address space, lack of run-time error checking in
compiled code, lack of good programming tools, need for extensive
declarations and a complicated compiler to achieve maximal efficiency, low
performance due to time-sharing, lack of maintainability and extensibility
due to assembly-language implementation.

To be powerful enough to provide for all the Lisp needs of the MIT AI Lab
for the foreseeable future.  This means a large address space, a fairly
powerful processor, and a fast disk, unlike most personal computers which
are designed to minimize cost.  It also means the ability to attach special
purpose hardware for applications such as image processing, robotics, and
VLSI design.

To be flexible down to the most primitive levels of the system, so that it
can be a vehicle for experimentation with new ideas and so that the language
and system can evolve rapidly and can be transported to new generations
of Lisp machine hardware.

       2. What  are the distinguishing features of your language and/or
	  programming environment?

I won't make this report over-lengthy by listing all the features.  In addition
to conventional Lisp, we support object-oriented programming (flavors) and
many other language extensions.  A language manual is available from the AI Lab
and has just been extensively revised.
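
A minimal flavor example, in the style of the Lisp machine manual; the
particular flavor, option, and method shown are illustrative only.

(DEFFLAVOR SHIP (X-POSITION Y-POSITION)   ;instance variables
           ()                             ;no component flavors
  :GETTABLE-INSTANCE-VARIABLES)

(DEFMETHOD (SHIP :DISTANCE-FROM-ORIGIN) ()
  (SQRT (+ (* X-POSITION X-POSITION)
           (* Y-POSITION Y-POSITION))))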

The system is integrated, meaning that the editor, the compiler, the
system, and the user program all run in the same address space and are all
written in Lisp.  Much of the system is designed to be extended and
customized by the user.  The system includes very powerful programming
tools.

The system is much larger than other Lisp systems, and is not portable
because many of its features are predicated on the Lisp machine hardware
and could not be implemented on a conventional computer without a large
penalty in efficiency.  The simpler language features not subject to this
limitation have been back-converted to Maclisp and adopted into NIL.

       3. Is your system operational? If yes, on what hardware?  If no,
	  when do you expect to be operational, and on what?

Lisp Machine Lisp has been under development since 1974 and has been
operational for about 3 years.  Currently there are about 22 CADR
machines in service, at MIT (Artificial Intelligence Lab, Laboratory for
Computer Science Macsyma group, Electrical Engineering & Computer Science
department, Speech Lab), at XEROX PARC, and at LMI.  More are under
construction.  Almost all of the Lisp work at the AI Lab has migrated from
the pdp-10 to the Lisp machines.

       4. What are your present plans for further development?  Include
	  estimated milestone dates, if possible.

The system cannot yet be considered mature, and is still under active
development, largely in the areas of user interface, programming tools, and
more efficient garbage collection in a large address space.  Two companies
(LMI and Symbolics) were formed in the past year to develop and market Lisp
machine Lisp (hardware, software, documentation, and applications).
Extensive improvements to Lisp machine Lisp should be forthcoming from
these companies during the next year.

∂01-Apr-81  1125	Scott.Fahlman at CMU-10A 	Spice Lisp Status  
Date: 31 March 1981 2341-EST (Tuesday)
From: Scott.Fahlman at CMU-10A
To: engelmore at USC-ISIB
Subject:  Spice Lisp Status
Message-Id: <31Mar81 234102 SF50@CMU-10A>
Redistributed-To: Kahn at ISI, Adams at ISI, 
Redistributed-To: Yonke at BBN, Zdybel at BBN, 
Redistributed-To: Wilson at CCA, 
Redistributed-To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
Redistributed-To: Balzer at ISIB, Crocker at ISIF, 
Redistributed-To: JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-UNIX, Sowizrel at RAND-UNIX, 
Redistributed-To: Hedrick at RUTGERS, 
Redistributed-To: Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, 
Redistributed-To: RWW at SU-AI, RPG at SU-AI, 
Redistributed-To: Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, 
Redistributed-To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, 
Redistributed-To: Engelman at USC-ECL
Redistributed-By: ENGELMORE at USC-ISI
Redistributed-Date:  1 Apr 1981


The following document gives an overview of Spice Lisp and describes
our current status and future plans.  I just finished revising this
document, so it's up to date.  If anyone out there has a Dover and
would prefer to read this in hard copy, just FTP the file
TEMP:SLISP.PRE[C380SF50] from CMUA.

-- Scott Fahlman

--------------------------------------------------------------------


                          CARNEGIE-MELLON UNIVERSITY

                        DEPARTMENT OF COMPUTER SCIENCE

                                 SPICE PROJECT

                            OVERVIEW OF SPICE LISP


                               Scott E. Fahlman

                                 31 March 1981




                             Spice Document S013/2
             Keywords and index categories: PE Lisp & DS External
              Location of machine-readable file: SLISP.MSS @ CMUA




                 Copyright (C) 1981 Carnegie-Mellon University




  Supported  by  the  Defense  Advanced Research Projects Agency, Department of
Defense, ARPA Order 3597, monitored by the Air Force Avionics Laboratory  under
contract  F33615-78-C-1551.    The  views  and  conclusions  contained  in this
document are those of the authors and should not be interpreted as representing
the official policies, either expressed  or  implied,  of  the  Defense  Advanced
Research Projects Agency or the U.S. Government.



1. Introduction to Spice
  Most  of  us in the Computer Science Department of Carnegie-Mellon University,
and many of our colleagues elsewhere, believe that the era of  time-sharing  is
ending,  killed  by  the  decreasing  cost of computers relative to the cost of
skilled people to use them.  We believe that the real challenge of  the  1980's
is  not  how to squeeze the last few cycles of performance from a shared super-
machine, but rather how to use plentiful computing cycles to make each computer
user more productive.  We believe that the best way to achieve this goal  is  to
make  available to each user a personal computer that is comparable in power to
today's large time-sharing machines, and  to  couple  this  with  an  excellent
display,  a  high-bandwidth network and, most important of all, a comprehensive
integrated software environment.  The Lisp Machine effort at MIT and the Dorado
effort at Xerox PARC are steps in the right direction, but many more steps must
be taken before we will have the kind of computing environment that  will  best
meet our research needs through the end of the decade.

  The  Spice  project  is  CMU's  attempt  to  produce  an  integrated software
environment for powerful personal computers.  Early on we decided not to  build
our  own  machines,  since we are ill-equipped to do that on a large scale, and
since a computing environment tied to home-brew equipment tends to be of little
direct use to the rest of the world.  Instead, we have followed  a  two-pronged
course, attempting to interest and assist manufacturers in building the kind of
personal machines we want, and attempting to build the Spice software system in
a way that will make it easily portable to any microcodable personal machine of
sufficient  power.    Since  the effort is department-wide, we felt that it was
inappropriate  to  build  Spice  around  any  single  computing  language;  our
community  includes  both  Lisp  users  and  people  who  prefer  the algebraic
languages, and we feel sure that new languages, with new  constituencies,  will
be appearing as time goes on.

  The minimum requirements for a Spice machine are specified in detail in other
documents,  but  briefly they are as follows: a speed of 1 MIPS, something like
16K instructions of writable control store, 1 Mbyte of main memory, 100  Mbytes
of  local  secondary  storage, a large virtual address space, a high-resolution
display, and an Ethernet or equivalent.   These  are  minimum  values;  we  can
obviously  make  good  use  of anything beyond these figures, but Spice will be
designed to run well on the minimum machine described here.  At  present,  among
available  machines, only the Lisp Machine and perhaps the Dorado come close to
these specifications; the Perq, as it stands now,  falls  short,  but  hardware
extensions  are  planned  by  Three Rivers Computer Corp. to bring it up to our
minimum Spice requirements.  We have strong reason to believe that other  Spice
machines  will be appearing soon, from a number of manufacturers, and that some
of these will be offered at relatively attractive  prices.    Our  emphasis  on
portability  is  dictated  in  part  by  our  desire  to  make  use of the most
attractive machine at any given time, and to allow for the use of a mixture  of
machines,  with  a  range of performances, at any given site.  To run different
software on each machine would drive the users and software maintainers crazy.

  Spice will be developed in several stages between now and 1985.   Ultimately,
it will contain at least the following subsystems:

   - A  simple  kernel  providing for multiple processes, a simple process
     scheduler, low-level  I/O,  and  management  of  a  separate  virtual
     address  space for each process.  The kernel also supports a powerful
     message-based inter-process communication protocol (IPC).

   - A central file system, accessible by all machines over the  Ethernet.
     Local  disks will be used as file caches and to implement the virtual
     memory system for each machine.

   - A comprehensive programming environment for the Ada language.

   - A comprehensive programming environment for Spice Lisp.

   - A data-base  system  that  will  integrate  the  libraries  for  both
     languages,  and  also  provide  for  personal filing, online handbook
     information, etc.  This may use video disk technology.

   - A comprehensive system for editing and document production.

   - A multi-media message system, which will allow users  to  communicate
     using any combination of text, pictures, and voice.

   - A   package  for  display  management,  graphics,  and  for  building
     excellent user interfaces.  Most other Spice software will  use  this
     system,  and  therefore will present a common style of interaction to
     the user.

   - A large number  of  application  programs  resulting  from  non-Spice
     research within our department: AI applications, IC design, etc.

  Four techniques are being used to enhance the portability of Spice:

   1. All  of Spice runs on a well-defined, microcoded virtual machine, or
      rather on a set  of  them,  one  for  each  language  system.    The
      Pascal/Ada system runs on a virtual machine that directly interprets
      P-Codes;  the  Lisp  virtual machine interprets a special byte-coded
       instruction set and also handles Lisp's memory management and garbage
      collection.  Some essential pieces of the  kernel  are  also  micro-
      coded.  The simplest way to port Spice to a new microcodable machine
      is  simply  to  re-create  the  virtual  machines by rewriting a few
      thousand words of microcode, a task that should take  well  under  a
      man-year  for  any reasonable machine.  Of course, some re-tuning of
      the system will also be required to get maximum performance on  each
      machine.

   2. All  of the Spice system that lies outside of the microcode is being
      written in Pascal, Ada, and Spice Lisp.  Portability at  this  level
      should   be  possible  even  to  non-microcodable  machines,  though
      performance may suffer.

   3. A serious effort is being made to produce a simple, well-documented,
      maintainable system.  We at CMU plan to live with Spice for  a  long
      time  and  to port it to several machines over its lifetime; we have
      considerable incentive not to let it turn into a rat's nest,  as  so
      many  systems  have  in  the  past.   All of the major people on the
      project, without exception, have a deep commitment to good  software
      engineering and the experience to put that commitment into practice.

   4. The  IPC  protocols make it possible for disparate processes to work
      together, even if they are written in different languages or run  on
      different  machines.    The  same  IPC that will be used by Spice is
      already running under Vax/Unix and is tied into  Franz  Lisp.    IPC
      provides  a  common  interface  to  all  servers in the Spice world,
      including the file system, and it insulates the user-level code from
      the low-level network protocols, which may vary with the  choice  of
      hardware.

2. Introduction to Spice Lisp
  Lisp  is  a  language  that  offers  many  advantages  for work in artificial
intelligence and  other  applications  of  symbolic  computation.    Lisp  also
provides   the   best   currently   available  environment  for  supporting  an
interactive, experimental, and evolutionary style of programming.   This  makes
it  a  good  choice  as  an  implementation  language  for  applications  whose
specifications will evolve over time and which cannot  be  cast  into  concrete
before  they  are programmed and used.  Finally, because Lisp is easy to extend
and modify, it is an excellent substrate upon  which  to  build  and  test  new
languages and systems.

  It  is  essential, therefore, that the Spice environment include a first-rate
Lisp system, and that this system be able to communicate smoothly with programs
and utilities written in other languages.  This  system,  Spice  Lisp,  is  now
under  construction.    Since Lisp handles memory allocation, variable binding,
function calling, loading, and other things in a  fundamentally  different  way
than  Pascal,  Ada,  and  other languages of the Algol family, we do not try to
force these systems to coexist within a single address space.  Instead, each
Lisp process has its own large virtual address space (ideally 32 bits per
process, though Spice Lisp can be run in a smaller space if necessary on a
given machine), and communicates with subsystems written in other languages
through message-passing, using the Spice IPC
protocols.    IPC messages are also used for all I/O and for communicating with
the Spice kernel.

  Spice Lisp bears a strong family resemblance  to  Maclisp  and  Lisp  Machine
Lisp:  it  is  (approximately)  a  superset  of  the former and a subset of the
latter.  Our microcoded virtual machine is similar to, but simpler  than,  that
of  the  Lisp  Machine.   The simplifications were dictated by our desire for a
small, clean, relatively conservative Lisp system that  would  be  portable  to
many  machines,  some  less  powerful  than  the  Lisp Machine.  Among the Lisp
Machine features that we have left out (for now, at least)  are  stack  groups,
user-defined areas for storage allocation, and flavors.

  It should be very easy to transport code from Maclisp, Franz Lisp, and NIL to
Spice  Lisp.    We  plan  to  provide tools to make this translation as easy as
possible, and may be able to automate it altogether.  Interlisp  and  UCI  Lisp
are  farther  from our language, but we plan to build some translation aids for
these languages as well.

  The native Spice Lisp programming environment will  be  in  the  Maclisp/Lisp
Machine  Lisp  style: editing will be done on the ASCII form of the code, using
an Emacs-like editor that  understands  Lisp  syntax.    This  editor  will  be
implemented  in  Spice  Lisp,  and will also serve as the user interface to the
read-eval-print loop.  We believe that the problems  inherent  in  S-Expression
editing far outweigh the few advantages of this style.  However, we are looking
into  the  possibility  of  building,  or  getting  someone  else  to  build, an
Interlisp-like environment on top of Spice Lisp for those users who prefer this
to the native Spice Lisp environment, or who want to use it temporarily  during
a period of transition to Spice Lisp.

3. Features of Spice Lisp

   - Spice  Lisp  functions  can  be  stored  as  list structure or can be
     compiled into more compact and efficient byte codes.    Functions  in
     these  two  forms  can be intermixed freely for execution.  The byte-
     coded instruction set is designed especially for Lisp, and is similar
     to that on the Lisp Machine.

   - Spice Lisp is run on a microcoded  virtual  machine  that  implements
     storage  management,  garbage  collection, and an interpreter for the
     byte-coded instruction set.  This minimal  virtual  machine  occupies
     about  4K  micro-instructions  on  the PERQ, which would translate to
     about 2K instructions on the more powerful Lisp Machine.

   - The virtual machine design allows for a  number  of  features  to  be
     implemented  either  in  microcode  or in macrocode, depending on the
     microcode space available in a given machine.  Among  these  features
     are floating-point and bignum arithmetic, array accessing, and a host
     of commonly-used functions such as PUTPROP and GET.  To microcode all
     of these on the PERQ would add perhaps another 2K - 3K.

   - Spice  Lisp,  in  the  Perq implementation, uses 32-bit lisp objects,
     each with a  4-bit  type  code.    Immediate-type  objects,  such  as
     fixnums,  contain  28  bits  of  data; pointer-type objects currently
     contain  a  24-bit  pointer  to a 32-bit word, but this can easily be
     extended to 26 bits.  Since every  data  type  and  every  allocation
     space  has  its  own  24-bit  address space, the addressable space is
     actually many times the apparent limit of 64  Mbytes;  the  microcode
     converts the 24-bit pointer into a 30-bit virtual address.

   - Spice  Lisp  supports the following primitive data types: cons cells,
     symbols, numbers, compiled  function  objects,  strings,  vectors  of
     arbitrary lisp objects, packed vectors of numbers or bits, and arrays
     of  any number of dimensions.  The primitive number types include 28-
     bit fixnums,  infinite-precision  bignums,  short  (28-bit)  floating
      point  numbers,  and  long  (96-bit) floating point numbers.  We also
     support a DEFSTRUCT package which allows users  to  build  their  own
     complex record structures out of vectors.

   - Garbage  collection  uses  a  modification  of  the incremental Baker
      scheme: accessible objects are continuously copied  from  oldspace  to
     newspace,  and are compacted in the process.  When oldspace is empty,
     the spaces are flipped and the process begins again.   We  anticipate
     that,   to  reduce  paging,  most  users  will  turn  off  continuous
     collection and  just  do  a  big  GC  from  time  to  time,  but  the
     incremental  scheme is there if you need it.  To improve performance,
     the user can choose to allocate permanent objects  in  STATIC  space,
     which  is  never  collected,  or in READ-ONLY space, which is neither
     modified nor  collected,  and  is  sharable  with  other  Spice  Lisp
     processes.   We do not support the definition of new storage areas by
     users.

   - Spice Lisp uses a fairly standard shallow-binding scheme.  We support
     Lisp Machine style closures, in which a function  is  closed  over  a
     specified  set  of  variables.    The  Spice  Lisp interpreter treats
     special and local variables just as in compiled code,  so  that  code
     will  not  mysteriously  break  when it is compiled.  (In interpreted
     code, local variables are no faster to use than specials, but we feel
     that it is important  to  keep  the  semantics  of  interpreted  code
     identical to that of compiled code for proper debugging.)

   - Spice  Lisp  implements  CATCH and THROW, but not spaghetti stacks or
     stack groups.  Most small-scale, intra-Lisp context switches  can  be
     handled  with  closures; large-scale multi-processing will be handled
     by creating several distinct Lisp  processes  and  communicating  via
     IPC.    We  may add stack groups later if we find that they are still
     needed.

   - Optional arguments to functions, with default values, are  supported,
     as are "rest" arguments.  Multiple value returns are supported, as on
     the  Lisp Machine; the user who does not like these can simply ignore
     this feature without getting into any trouble.

   - Spice Lisp contains a "package" system similar to that  on  the  Lisp
     Machine.   This allows the symbols used in one subsystem to be hidden
     (on a separate obarray) from the symbols used in other subsystems.
     Some  such  feature  is  essential  if  a  large number of subsystems
     written by many  people  are  to  be  used  together  without  naming
     conflicts.

   - A complete user's manual and a complete virtual machine specification
     are  being  written  now,  and will be kept up to date as the primary
     documentation  for   the   system.      Most   of   the   interesting
     incompatibilities between Spice Lisp and other popular lisp systems
     are noted in the user's manual.
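
  To make the list above more concrete, the short sketch below exercises
several of the features just described: in-core compilation, CATCH and
THROW, a DEFSTRUCT record, a closure over a specified special variable,
and the optional, "rest", and multiple-value argument conventions.  It is
written in Lisp Machine style syntax, which the report says these features
resemble; it is illustrative only, and every name in it (POINT,
MAKE-COUNTER, FIND-FIRST-ATOM, and so on) is invented for this note rather
than taken from the Spice Lisp manual.  (One arithmetic note on the
addressing item: a 24-bit pointer to 32-bit words spans 2^24 x 4 bytes =
64 Mbytes per space, which the microcode maps into a 30-bit virtual byte
address.)

      ;; A DEFSTRUCT record built out of a vector, with MAKE-POINT as
      ;; its constructor and POINT-X / POINT-Y as accessors.
      (DEFSTRUCT (POINT)
        POINT-X
        POINT-Y)

      ;; A function closed over a specified special variable, in the
      ;; Lisp Machine closure style described above.
      (DEFVAR *COUNT* 0)
      (DEFUN MAKE-COUNTER ()
        (LET ((*COUNT* 0))
          (CLOSURE '(*COUNT*)
                   #'(LAMBDA () (SETQ *COUNT* (1+ *COUNT*))))))

      ;; CATCH and THROW used for a non-local exit; no spaghetti stacks
      ;; or stack groups are needed for this kind of escape.
      (DEFUN FIND-FIRST-ATOM (X)
        (*CATCH 'FOUND (SCAN X)))
      (DEFUN SCAN (X)
        (COND ((ATOM X) (*THROW 'FOUND X))
              (T (SCAN (CAR X)) (SCAN (CDR X)))))

      ;; Optional arguments with defaults, "rest" arguments, and a
      ;; multiple value return.
      (DEFUN GREET (NAME &OPTIONAL (GREETING 'HELLO) &REST EXTRAS)
        (LIST* GREETING NAME EXTRAS))
      (DEFUN DIVIDE (N D)
        (VALUES (// N D) (\ N D)))

      ;; Interpreted definitions such as these may be compiled in-core
      ;; and mixed freely with interpreted ones.
      (COMPILE 'SCAN)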

4. Plans and Schedules
  The work on Spice Lisp began in earnest in the late spring of 1980.  At  that
time,  the  decision  was  made  to  use  the  Perq  as the machine for initial
development of the system, since no better personal machine  was  available  to
us.    From  the  start,  our  design  was  aimed  at an extended Perq with 16K
microstore, of which we planned to use about 6K  for  the  Spice  Lisp  virtual
machine.    However,  it gradually became apparent to us that no Perqs would be
delivered until late summer of 1980, and no 16K Perqs until the fall  of  1981.
We decided to go ahead with the microcoding effort, using an emulator until the
physical  Perqs  arrived.    We  also  decided  that  we  would try to make the
microcode fit into the 4K Perqs, but that we were not willing to compromise the
basic design in order to accomplish this.  At no time was it ever our  goal  to
have this lisp run at reasonable speed on the existing small Perqs; using these
machines  was merely a stopgap development strategy, to focus our efforts while
we waited for true Spice machines.

  During the summer of 1980 we were able to complete the specification for most
of the Spice Lisp virtual machine, microcode this design, and debug the code on
the Perq emulator.  At present -- spring of 1981 -- the user's manual for Spice
Lisp is nearing completion, the byte-code compiler is nearing  completion,  and
the  coding  of  the parts of the system written in Lisp is well underway.  Our
plans called for the system to be turning over, in a primitive way, by June  of
1981,  and  to  be a relatively comfortable system to use by September of 1981.
The software effort seems to be on schedule.

  Unfortunately, we have outgrown the 4K Perq.  Our current microcode  load  is
under  the 4K limit, but it is too large to coexist with the microcode-resident
parts of the Spice kernel or with  any  scaffolding  that  we  could  build  to
replace  this.    In  part, this is due to some design errors in the Perq which
cost us a great deal in microcode space.  These errors will be corrected in the
extended 16K Perq, and we remain confident that Spice Lisp  (and  the  rest  of
Spice)  will run quite well in that machine.  We could make Spice Lisp run very
slowly in the existing Perq through the  use  of  microcode  overlays  and  the
removal  of some features, but this would be a large effort and we see no point
in doing this.  We have instead redirected our  effort  toward  the  16K  Perq,
which is scheduled for delivery in fall of 1981.

  In the meantime, we plan to implement our virtual machine, now implemented as
about 4K of Perq microinstructions, in some portable high-level language,
probably Common Bliss.  This will allow us to run the entire Spice Lisp  system
on the Dec-20 and on the Vax under both Unix and VMS.  Once this version is up,
our  development  effort  for  the Spice Lisp environment will proceed on these
machines until the extended Perq (or some other Spice machine) is available  to
us.    Following  the  lead  of  Chuck  Hedrick  at Rutgers, we plan to use the
extended addressing mode of the Dec-20 in the implementation for that  machine.
The  choice  of  Bliss  is  dictated  by  the availability of a good optimizing
compiler and reasonable ways of accessing the underlying machine; at some  time
in  the  future,  when a decent compiler is available, we will probably convert
this system to Ada.  The virtual-machine program will be rather  small;  it  is
the  interface  to  the various operating systems and the simulation of some of
the Spice environment that will be tricky.

  In addition to providing us with a temporary  development  environment,  this
re-implementation  will have the useful side effect of providing a full-fledged
Spice Lisp system on 20's and Vaxen.  This system will certainly be useful  for
instructional  purposes,  and  may be fast enough to be useful for real work on
these machines, though it still will not be as fast as it would be on a
microcodable  machine.    The  interpretation  of  byte codes will slow us down
somewhat, as compared with Lisps that compile to native  instructions  for  the
same machines, but the compactness of the byte codes and the compacting garbage
collector should give us good paging characteristics.  It is unclear to us what
the speed of the resulting system will be.

  In  addition  to  the  extended  Perq, we are looking into the possibility of
getting some number of  Lisp  Machines  and/or  Dorados,  to  provide  a  high-
performance  vehicle  for Spice.  (The Dorado would only be attractive to us if
we could get it with a 16K microstore.)  Even though  the  Lisp  Machine  comes
with  an  excellent  Lisp system of its own, we would probably run Spice on our
machines so that they would fit in with the other machines in our environment.

  Our current timetable is as follows:

June, 1981      The Spice Lisp User's Manual and the byte-code compiler
                (running in Maclisp for now) are complete.  The essential
                parts of the Lisp-level code are complete.

July, 1981      The Bliss version of the virtual machine is up on the Dec-20.
                All of the Lisp-level parts of the Spice Lisp system are
                written and are being debugged on the Dec-20 virtual machine.

September, 1981 
                The Bliss version of the virtual machine  is  up  on  Vax/Unix.
                Spice  Lisp  is complete and running on the Bliss version.  The
                virtual machine for the 16K  Perq  is  complete  and  has  been
                debugged on the Perq emulator.  A number of user-level
                environment packages are up, providing a reasonably comfortable
                environment.

December, 1981  Spice Lisp is running on the 16K  Perq,  and  is  beginning  to
                acquire real users.

1982            Development   of   the   Spice  Lisp  environment  and  library
                continues.  The integration of the  system  with  the  rest  of
                Spice  is improved as the system is used.  Spice is ported to a
                second, more powerful personal machine.

  The Spice Lisp project is being jointly coordinated by Scott Fahlman and  Guy
Steele.    Other  members  of the group are Gail Kaiser, Walter van Roggen, Joe
Ginder, Leonard Zubkoff, and Carl Ebeling, all CMU-CSD grad students, and  Neal
Feinberg, a CMU undergraduate.

∂01-Apr-81  1346	ENGELMORE at USC-ISI 	Agenda for Lisp Meeting
Date: 1 Apr 1981 1333-PST
Sender: ENGELMORE at USC-ISI
Subject: Agenda for Lisp Meeting
From: ENGELMORE at USC-ISI
To: Kahn, Adams, 
To: Yonke at BBN, Zdybel at BBN, 
To: Wilson at CCA, 
To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
To: Balzer at ISIB, Crocker at ISIF, 
To: JONL at MIT-MC, Moon at MIT-MC, 
To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
To: Hearn at RAND-UNIX, Sowizrel at RAND-UNIX, 
To: Hedrick at RUTGERS, 
To: Green at SCI-ICS, 
To: Hendrix at SRI-KL, Shostak at SRI-KL, 
To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
To: Feigenbaum at SU-SCORE, 
To: RWW at SU-AI, RPG at SU-AI, 
To: Fateman at BERKELEY, 
To: Griss at UTAH-20, 
To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
To: Lee.Moore at CMU-10A, 
To: Engelman at USC-ECL
Message-ID: <[USC-ISI] 1-Apr-81 13:33:12.ENGELMORE>

                              LISP MEETING AGENDA

[Note: Names in parentheses indicate the discussion leader for that topic.
If your name is included and you would rather have someone else do it, 
let me know.]

0830   Introduction and Welcome (DARPA program managers)
            Why we're here, what we want to accomplish today,
            How the meeting will proceed

0845   Scenarios for future Lisp Development (Engelmore)
            These scenarios will be used to focus the discussion on
            future developments:

            1) The status quo: Interlisps on DEC machines, D-0,
                Foonlys, Jericho, Maclisps on LMI and Symbolics
                machines, NIL on VAX, S1, Nu (maybe), UCI Lisp
                and Utah Lisp on assorted machines. I. e., many
                dialects on many different systems with subcritical
                support in most instances.

            2) Merge an Interlisp-like capability into another dialect,
                e.g. CADR Lisp, NIL or Standard Lisp.

            3) One kernel Lisp that supports multiple Lisp VMs (e.g.
                Interlisp and CADR Lisp).  Portability of the kernel
                would be emphasized.

            4) Dialect portability, either via the implementation
                language, e.g. Pascal or C, or via the Lisp kernel.

            5) Other scenarios suggested by participants.

0945   Discussion of status reports from implementers (Hearn)

1015   Coffee Break

1030   Discussion of User requirements, Present and Future (Green)
            Language
            Programming environment
            User interface
            Networking
            Portability

1145   A short meta-discussion (Engelmore)
            Possible revision of afternoon's agenda in light of
            morning's discussion.

1200   Lunch

1315   Discussion of Programming Environments (Balzer)
            What should be in it?
            Should it all be in Lisp?
            If not, how would the Lisp and non-Lisp parts be interfaced?
            What display, network, mail, database access and other
               capabilities should be a part of the environment?

1430   Portability issues (Jon L White)
            How do we get it?
            How important is it?

1510   Coffee Break

1520   Feasibility issues (Hendrix)
            What needs to be done?
            Who's going to do it?

1600   Software engineering issues (Feigenbaum)
            How can we avoid the kinds of difficulties encountered
            in implementing Interlisp on D-0 and VAX?
            How do we get independence from the operating system?
            How does one tune for efficiency?

1700   Wrap-up (Engelmore)
            Assessment of consensus on a scenario for the future
            of Lisp.

1800   Meeting adjourned, maybe
            We're keeping open the option of resuming the discussion
            after dinner.


-rse

∂01-Apr-81  2017	ENGELMORE at USC-ISI 	Lisp meeting: bring cash    
Date: 1 Apr 1981 2005-PST
Sender: ENGELMORE at USC-ISI
Subject: Lisp meeting: bring cash
From: ENGELMORE at USC-ISI
To: Kahn, Adams, 
To: Yonke at BBN, Zdybel at BBN, 
To: Wilson at CCA, 
To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
To: Balzer at ISIB, Crocker at ISIF, 
To: JONL at MIT-MC, Moon at MIT-MC, 
To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
To: Hearn at RAND-UNIX, Sowizrel at RAND-UNIX, 
To: Hedrick at RUTGERS, 
To: Green at SCI-ICS, 
To: Hendrix at SRI-KL, Shostak at SRI-KL, 
To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
To: Feigenbaum at SU-SCORE, 
To: RWW at SU-AI, RPG at SU-AI, 
To: Fateman at BERKELEY, 
To: Griss at UTAH-20, 
To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
To: Lee.Moore at CMU-10A, 
To: Engelman at USC-ECL
Message-ID: <[USC-ISI] 1-Apr-81 20:05:50.ENGELMORE>

I forgot to mention that this meeting is not free.  There will be
a  charge  of  six  dollars  to  cover  the  cost  of  lunch  and
refreshments.   The  money  will be collected at the start of the
day's festivities.  No credit cards, please.

Gary Hendrix will  provide  descriptive  information  on  exactly
where  the  meeting  will  take  place,  and  perhaps  even  some
procedural information for finding it.

Looking forward to seeing you there,
Bob

∂02-Apr-81  0749	ENGELMORE at USC-ISI 	Lisp meeting reports   
Date: 2 Apr 1981 0712-PST
Sender: ENGELMORE at USC-ISI
Subject: Lisp meeting reports
Subject: [CSVAX.fateman at Berkeley: updated Franz report]
Subject: [SHOSTAK at SRI-KL: Position Paper on Future of LISP]
Subject: [Griss at UTAH-20 (Martin.Griss): SYSTOOL.DOC]
From: ENGELMORE at USC-ISI
To: Kahn, Adams, 
To: Yonke at BBN, Zdybel at BBN, 
To: Wilson at CCA, 
To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
To: Balzer at ISIB, Crocker at ISIF, 
To: JONL at MIT-MC, Moon at MIT-MC, 
To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
To: Hearn at RAND-UNIX, Sowizrel at RAND-UNIX, 
To: Hedrick at RUTGERS, 
To: Green at SCI-ICS, 
To: Hendrix at SRI-KL, Shostak at SRI-KL, 
To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
To: Feigenbaum at SU-SCORE, 
To: RWW at SU-AI, RPG at SU-AI, 
To: Fateman at BERKELEY, 
To: Griss at UTAH-20, 
To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
To: Lee.Moore at CMU-10A, 
To: Engelman at USC-ECL
Message-ID: <[USC-ISI] 2-Apr-81 07:12:55.ENGELMORE>

Here's today's installment of reports: Franz Lisp (updated), Standard Lisp,
and a position paper from SRI.  I'm encouraged by the large amount of
thoughtful activity that this meeting has generated already.  -rse
	
Begin forwarded messages
Mail-From: ARPANET host BERKELEY rcvd at 1-Apr-81 1601-PST
Date: 1 Apr 1981 15:27:48-PST
From: CSVAX.fateman at Berkeley
To: engelmore@usc-isi
Subject: updated Franz report

This  is  a  longer  and more detailed version of the earlier report, providing
information more nearly comparable to subsequent distributed reports.

   1. Describe your project.

Franz  Lisp  is  a  Maclisp-like  lisp  system  that was written at UC Berkeley
primarily to support the Macsyma algebraic manipulation system on large-address
space machines, and specifically the VAX in the UNIX environment.

   2. What are the distinguishing features of your language and/or
   programming environment?

Franz  is  written primarily in C (16,000 lines), but with a large part in lisp
(3,000 lines) and a tiny part in assembler (300 lines).

Franz  uses 32 bit pointers (on the VAX) and supports these data types: string,
symbol,  fixnum, cons, flonum (double precision), array (very general), bignum,
value  and hunks. Each page can contain only one type and thus the address of a
datum  determines the type. A conventional mark and sweep garbage collection is
used.  `Pure'  spaces exist so that compiled code literals need never be looked
at by the garbage collector.

Franz  runs on a Vax 11/780 and does not require microcode support under either
the  Unix  or VMS operating systems. We have reason to believe that at least on
the 11/780, no benefit would be obtained by microcode.

Franz  has  a  clean  interface  to  Fortran  77, C, Pascal, and other language
systems which conform to the usual UNIX call conventions.

The UNIX program profiling tools, etc. are operational with Franz.

No spaghetti stack; only primitive funarg handling.

Franz on a VAX 11/780 runs maclisp programs about 3-6 times slower than a KL-10
(mit-mc's). Although figures are hard to compare, it appears that the 11/780 is
substantially faster than a CADR running Macsyma.

Run time environment is close to maclisp (thus, less elaborate than Interlisp).

A  byte lisp compiler and interpreter were written for Franz, but are currently
not used since compiled lisp proved to be more efficient.

Franz  Lisp  is  distributed as part of the Berkeley Vax software distribution,
which  has  reached  over  100  sites  so  far.  The act of preparing Franz for
distribution  every half year or so has had a very good effect on the software:
everything  is  documented,  lingering bugs are fixed and any site dependencies
are removed. Some of the sites running Franz have enhanced the system for
their  particular  needs,  for  example:  CMU  added  IPC,  and Bell Labs added
evalhook.

There  is  a  compiler,  "Liszt"  which  (under  flag control) understands many
features  of  Maclisp,  UCI  lisp,  and Interlisp. Thus the compiler provides a
mapping  from other lisps to "Franz" so that files with different host dialects
can be mixed, subject to name conflict problems.

Franz  should  be  easily  transportable  to  other  systems  with sufficiently
powerful  system  facilities,  and  a  C  compiler.  Each  such transport would
generally require a rewriting of Liszt's code generator.

   Details of the Franz Lisp compiler: Liszt

   Liszt  is  a  one  pass  compiler  with  peephole  optimizer which generates
Unix-style Vax assembler language output. Liszt is 3785 lines of lisp code. The
following  functions  are handled in a special way by the compiler, (in general
this means that the function is open coded):

and  arg atom bigp bcdp *catch comment cond cons cxr declare do dtpr eq equal =
errset  fixp  floatp  get  getd  getdata  go list map mapc mapcan mapcar mapcon
maplist  memq  not  null  numberp or prog progn prog1 prog2 quote return rplaca
rplacd rplacx setarg setq stringp symbolp symeval *throw typep zerop + - * / 1+
1- \\ < >

There  are  two calling sequences. The primary one allows compiled code to call
any  compiled or interpreted function. Calling is done by indirecting through a
transfer table. Initially, calls using the transfer table go through a function
linkage  routine.  The function linkage routine will, under flag control, alter
the transfer table so that subsequent calls through the transfer table will
bypass  the  function  linkage routine. The second calling sequence is used for
calls  made within a file. This calling sequence uses the faster Vax subroutine
call instruction (jsb).


Liszt  is  composed  of  four source files: one macro file, camacs.l, and three
other files, car.l, cadr.l and cddr.l. The sizes and times (in seconds on a VAX
11/780) that it takes to compile each of these files are shown in this table:

compile   assm
  time    time    lines    words     chars   file
   3.1     1.3      139      561      3784   camacs.l
  30.5    17.7      908     3353     25840   car.l
  30.1    16.8     1298     6132     40874   cadr.l
  34.9    20.4     1440     6604     42995   cddr.l
  -------------------------------------------------
  98.6    56.2     3785    16650    113493   total

The size of the compiled compiler is 108077 bytes.  The size of the interpreted
compiler is:
   15017 cons cells
     632 symbols
         + an insignificant amount of other data types.


   Although  Liszt  is not a truly portable Franz Lisp compiler, it was written
in  such  a  way  that  it could be easily rewritten to generate code for other
machines.  Only 128 lines of the compiler deal specifically with the VAX 11/780
instruction  set,  and  just  the  low  level  routines  understand the various
addressing modes available on the Vax.

   The compiler has been well tested by, among other things, compiling the huge
MACSYMA system.

   3. Is your system operational? If yes, on what hardware? If no,
   when do you expect to be operational, and on what?

Franz  has  been running under the VAX/UNIX environment since October, 1978. It
was moved to VAX/VMS in about 3 weeks (April, 1980). It has been distributed to
all (106 as of 4/1/81) "4BSD UNIX" sites and some unknown number (>5) VMS
systems. It has been running at some non-Berkeley sites since January, 1979. It
runs unchanged on 11/750 and smaller VAX systems.

We  are  not  restricting  distribution  of  the  source  and intend to provide
distribution of enhancements given to us without restriction.


   4. What are your present plans for further development? Include
   estimated milestone dates, if possible.

There  is  no  further  major  development of Franz necessary for our immediate
goals  in  building  an  integrated  scientific  environment,  although various
activities concerned with tuning, fixing bugs, etc, are still being funded at a
low  level  by  the  Dept.  of  Energy  Applied Math Sciences program. No major
changes have been made in at least a year to the VAX code.

We  do  anticipate further development of our environment, but our intention is
to  provide  these  enhancements  in  the  operating  system,  rather than in a
language-specific way, since Lisp is not the sole, or even primary, application
or implementation language in UNIX.

Because  of  interest  in  personal  computers, we may be transporting Franz to
MC68000 UNIX; we may also set up a system for IBM style computers.




          --------------------
Mail-From: ARPANET host SRI-KL rcvd at 1-Apr-81 1618-PST
Date:  1 Apr 1981 1619-PST
From: SHOSTAK at SRI-KL
To: engelmore at USC-ISI
Subject: Position Paper on Future of LISP

Bob-
     Following is a position paper I have written on behalf of SRI
that presents our feelings about how the cost of language support
can be controlled in the '80s.  As you will see, we make the case
for portability of implementation.  We are still much interested in
proving the concept with a portable Interlisp implementation based
on a virtual machine concept. 
     I hope to express the views propounded in this paper at the
LISP meeting.
     -Rob
            A Position on LISP Support for the 80's

                       R. Shostak
                         3/80

     For the past decade the LISP language has served as the mainstay
of AI and other computer science research in this country.  Indeed,
had it been deemed appropriate to discontinue such research during this
period, it would only have been necessary to scan the Arpanet Site Map,
training missiles on those sites sporting at least one DEC-10 and an
implementation of some LISP dialect.  The ubiquity and stability of the 10 
as a research vehicle made it possible to share LISPs and LISP-based
tools in a manner never before experienced in the 25 year history of	
the language.  
     As the 70s progressed, however, it became clear that this stability
was not to last forever.  First, programs quickly grew sufficiently large
to stress the meager (by today's standards) bounds of the 18-bit
virtual address space.  Second, dramatic advances in LSI fabrication
technology, and dynamic memory technology in particular, made it possible
to begin thinking about personal computers as serious contenders for the
research capital equipment dollar.
     The problem of insufficient address space headroom was recognized
as early as 1974 with the addition of overlay features for compiled code
in INTERLISP-10.  As the decade unfolded, it became clear that such
measures represented at best a stop-gap solution to the address space
problem.  In the late 70's, researchers developing large systems began
spending intolerably great fractions of their time to circumvent space
limitations.
     The 70's also gave rise, for the first time, to the possibility
of placing LISP in a personal computer setting.  The LISP MACHINE project
spearheaded by Greenblatt and Knight at MIT was among the first steps
in this direction.  The Alto BYTELISP project, originated by Deutsch at
Xerox around 1973, soon became the first transportable LISP
architecture for personal computers.  At present, at least half a dozen
personal machines that can support LISP are either available or
in various stages of development.
     As the 80's unfold, it is clear that the era of the PDP-10 is
nearly over and that less expensive time-shared machines (such as the
VAX) and the new generations of personal machines are destined to
become the new workhorses.  It seems equally clear, however, that no
single machine architecture is likely to emerge, at least for the
foreseeable future, as a new de facto standard.  The research
community and its sponsors may be faced with the potentially critical
problem of providing language support for a myriad of new machines.
As the present decade progresses, moreover, and as new and more
powerful micro-processors arrive on the scene- each with a shorter
"half-life", perhaps, than its predecessors- the proliferation
problem will sharpen.
     We believe that the best and perhaps only feasible approach
to the control of LISP support costs in the 80's lies in the
promotion of transportable and robust implementations.  We believe
that the proliferation of target machines is inevitable, if not
ultimately for the good, and that the continuing dramatic reductions
in the cost of designing and introducing new hardware will not be
matched by corresponding reductions in the cost of new software.  It
is thus imperative to increase the durability of new implementations
as much as possible.  
     Later in this document, we argue that such durability can in
fact be achieved.  Before doing so, however, we consider some
alternative approaches and argue as to why we feel they are not
appropriate.


1.  Abandon All But a Single Dialect  (the Fahrenheit 451 Approach)

     A number of LISP dialects are currently in use by the research
community or are under development.  One approach to the computing
resources problem is simply to declare one (or perhaps two) of these
to be the "official" dialects and to discourage the development and
use of others.  We feel that such an approach would not only fail to
solve the problem, but would be both extremely costly in the short
run and dangerous in the long run.
     The premise on which this approach is based is the so-called
"n times m" argument-  if we have n machines and m dialects, we will
be obliged to support nm implementations; if we cut m down to 1,
therefore, we will have made great progress.  While this argument
would seem to have all the authority of computational complexity
theory behind it, it is in fact founded on a false assumption-  it
is not the case, nor has it ever been the case, that all dialects
exist on all machines; very few machines, in fact, support more than
one serious LISP implementation, and perhaps only the PDP-10 (and its
derivatives) supports more than two.  The proliferation we are
facing is a proliferation of hardware alternatives, not of dialects.
    The phasing out of either of the two most well-established
dialects- INTERLISP and MACLISP- would have enormous cost.  Large
investments have been made over the last decade both in programming
environments and in applications programs.  The cost of
transporting these tools and programs, say from INTERLISP to MACLISP,
would be formidable.  Specific evidence is provided by the recoding of
SRI's JOVIAL Verification System from INTERLISP into MACLISP that was
undertaken by CSL under RADC sponsorship.  Even with the use of
mechanical translation aids, the effort required several hundred man
hours of labor to transport a program of less than 200 pages.  Indeed,
the translation effort was not much less costly than the original
implementation.  It is true that much of the time spent was for
education purposes (relatively few researchers are equally conversant
in INTERLISP and MACLISP).  Nevertheless, it is inevitable that an
effort to transport a useful part of the INTERLISP programming
environment itself would be vastly more expensive, even assuming it
were technically feasible.  (The stack and non-local control
primitives on which much of this environment depends are not present
in MACLISP and its derivatives, even disregarding multiple
environments.)
     We feel that the dialect abandonment approach would have serious
long-run dangers as well.  LISP has traditionally been a vehicle for 
research in programming language design.  To discourage the development
and use of new dialects, such as NIL and others, would ultimately
be to the detriment of the entire computer science community.
     In any case, new dialects will continue to be introduced and
existing ones utilized irrespective of the policy decisions of any
particular group or interest.  The Xerox Corporation, for example,
is continuing to develop INTERLISP software, and at least two
LISP machine companies are planning to offer derivatives of the MIT
original; efforts such as the NIL project, FRANZ, and VLISP will
continue to be mounted as long as LISP hackers, of which there is no
shortage, inhabit the computing world.


2.  Select a Standard Machine

     The dual approach- equally simplistic- is that of encouraging the
research community to agree upon and adopt a single machine/operating
system combination- (e.g., the coming Symbolics  machine) as a universal
host- a PDP10 for the 80's.  All but one of the objections raised with	
regard to the first approach apply with equal or greater force here.	
Moreover, as we noted earlier, no single machine currently or soon to be
available is likely to retain a state-of-the-art status for very long.
Of course, it is difficult to find a researcher currently struggling
against the load average who would not be delighted to have his or her
own personal machine, regardless of whether that machine is a DORADO,
a CADR MACHINE, an F5, or whatever.  History shows quite clearly, however,
that as new and improved alternatives become available (which will certainly
be the case), existing ones quickly become inadequate, while the new
alternatives become much in demand.


3.  Reduce Incompatibilities Among Existing Dialects

     The idea of bringing the various existing dialects closer together
was considered at some length at the meeting of a LISP discussion group
that convened at MIT last spring.  The first question raised by this
notion was that of what it really means to reduce incompatibilities-
does it mean to change INTERLISP, for example, so that it more
resembles MACLISP?  And if so, would the result be INTERLISP, or merely
yet another LISP dialect (perhaps LARRYMACINTERLISP) that no one would
actually use, or worse yet, that some would.
    The main product of that meeting was a detailed enumeration of
incompatibilities among dialects spanning the range from character sets
to control primitives.  It was amply demonstrated that the dissimilarities,
far from being merely syntactic or mechanically reconcilable, are both
numerous and deep, running to the heart of the separate philosophies of
these languages.  It was concluded, in particular, that an attempt to
construct a common basis for the various dialects- in the form of a
standard low-level LISP implementation interface- would not be practical.	
Nor would be the dual notion- that of a SUPERLISP dialect incorporating
all the "best" features of existing dialects- a PL/1 for LISP.



4.  The Promotion of Portable Implementations

     We believe that the most sensible approach to the control of
LISP support costs as new generations of machines become available is
to promote portability as much as possible.  If we are to take full
advantage of the state of the art in hardware technology as it evolves,
we have no other choice but to increase the durability and hence
longevity of our software investments.  To a certain extent, of course,
we have been exploiting portability all along- the large body of code
that lies above the Interlisp Virtual Machine interface, for example,
is largely (though not perfectly) common to all of the existing
INTERLISP implementations; the same is true of the existing and coming
LISP MACHINE environments.  As new hardware alternatives become more
concentrated in time, however, and especially as the new generations of
non-microcodable microprocessors become attractive implementation
targets, it will be necessary to place greater emphasis on portable
LISP architectures.
     In view of the undeniable fact that the first wave, at least, of
inexpensive personal machines will not quite match the computing power
of the large machines we are accustomed to, it is fair to ask whether
a useful degree of portability can be retained in the face of the
demands of efficiency.  We believe that it can.  To support this
belief, we offer three cases in point.
     The first case is that of the Bytelisp system begun by L. P. Deutsch
and W. S. Haugland in 1973.  Bytelisp is a transportable LISP system
architecture that implements INTERLISP in complete faithfulness to the
existing TENEX implementation.  The first version of the system, targeted
for the Alto minicomputer, was able to run large programs, though much
too slowly for practical use.  (The Alto, it should be remembered, is
practically a toy by comparison to the coming generation of 32-bit
personal machines.)  However, the subsequent transfer of Bytelisp to
the Dorado (Burton, Masinter, et al.) in combination with extensive changes
to the system, has resulted in an implementation that is reported to 
run five times faster than a single-user KA-10.  The version now running
on the Dolphin, which is perhaps more typical of the coming generation
of affordable hardware, is perfectly adequate.  It should be noted
that a considerable factor in the success of the Dorado implementation
was the performance improvement that was obtained by moving large portions
of the system into LISP itself- as opposed to the lower-level BCPL.
     An even more impressive demonstration of the feasibility of
portable LISP implementations is provided by J. Chailloux's VLISP dialect.
VLISP is the workhorse dialect for serious computer research in France.
It runs, perhaps, on more machines, large and small, than any other single
dialect of LISP.  Among these are the PDP-10 and 20 (under TOPS-10,
TOPS-20, WAITS, IRCAM, and TENEX), the PDP-11 (under RT11 and RSX11), the
TI1600 (BDOS/D), the SOLAR 16 (TSF), the TRS-80 (TRS-DOS) (in simplified
form), and the IMSAI 8080 (CPM).  A Motorola 68000 version sponsored by
INRIA is currently underway for the purpose of supporting a sophisticated
VLSI design facility.  
   The key to VLISP's portability is the use of a low-level virtual
LISP machine concept that is somewhat akin to the Bytelisp idea but is
quite a bit cleaner and is not at all dependent on microcode support.
The effort required to bring this virtual machine to new targets is
measured in man-months rather than man-years. The performance of inter-
preted VLISP code is not merely adequate, but remarkable.  Timing tests
conducted at SRI pitting interpreted VLISP against interpreted INTERLISP
on the KL under TOPS-20 showed VLISP to run about an order of magnitude
faster.  Much more remarkably (given INTERLISP-10's reputation for lack
of interpreted speed), a recent test comparing interpreted MACLISP (on
the F2), INTERLISP (F2), and VLISP (Radio Shack TRS-80 !) versions of
the Fibonacci function showed VLISP running somewhat faster than
MACLISP and nearly one and a half times as fast as INTERLISP.  It should
be noted, incidentally, that the tail-recursion feature of VLISP was not
a factor in these tests.
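     (The Fibonacci function used in comparisons of this sort is
ordinarily the naive doubly recursive definition; the code actually used
in the SRI timings is not reproduced here, but in MACLISP-style syntax it
is essentially the following.)
          (DEFUN FIB (N)
            (COND ((LESSP N 2) N)
                  (T (PLUS (FIB (DIFFERENCE N 1))
                           (FIB (DIFFERENCE N 2))))))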
     Our point here is not to tout VLISP but to indicate the effectiveness
of the virtual machine concept on which it is predicated.  A recent study
conducted in CSL at SRI (under a subtask of the AI Center's Navelex C2
Workstation Project) suggests that a version of this concept can be used
to produce a fast, portable INTERLISP implementation as well.  A
compiler written in INTERLISP that compiles virtual machine code into
PDP-10 lap code, for development purposes, is nearly complete.  We feel
that the continuation of this work would be enormously beneficial not
only from the standpoint of providing a useful, portable INTERLISP, but
as a means of advancing our knowledge about achieving portability for
large software systems.
     A third example of the feasibility of portable LISP implementations
is the FRANZ LISP implementation running on VAX UNIX.  FRANZ was born
of a PDP-11 LISP system originally written by a pair of undergraduates
in the course of a few months.  As it grew, it picked up MACLISP and
LISP MACHINE LISP features; it currently supports a large part of the
MACSYMA system.  FRANZ LISP is written almost entirely in C; a small
part is written in VAX 11/780 assembler, and another part in LISP.
     Despite the fact that most of FRANZ is written in a high-level
language, its performance is adequate to be useful.  The VAX 780, of
course, is a fast and relatively expensive computer- it hardly
qualifies as the sort of personal machine that could be afforded by
a single user.  It is by no means clear, in fact, that FRANZ would
perform satisfactorily if brought, say, to a microprocessor-based
machine.  Nevertheless, FRANZ lends additional credibility to the
concept of portable implementations.
-------

          --------------------
Mail-From: ARPANET host UTAH-20 rcvd at 1-Apr-81 1800-PST
Date:  1 Apr 1981 1707-MST
From: Griss at UTAH-20 (Martin.Griss)
To: engelmore at USC-ISI
Subject: SYSTOOL.DOC

I will append to this a version (draft) of a summary of our current and planned
programming environment; this is rough, and meant for your information. I will try
to bring an improved form to the meeting. Tony and I will have a short summary
for remailing.
Utah Symbolic Computation Group                                      March 1981
Operating Note xx









               A Portable Standard LISP Programming Environment

                                      by

                                Martin L. Griss

                        Department of Computer Science
                              University of Utah
                          Salt Lake City, Utah 84112

                              Preliminary Version

                          Last Revision: 1 April 1981







                                   ABSTRACT

This  report describes the current and proposed programming environment that is
being developed upon the portable SYSLISP based Standard LISP at the University
of Utah.















Work supported in part by the  National  Science  Foundation  under  Grant  No.
MCS80-07034.

1. Introduction
  In  this  preliminary  report,  we describe the "Tools" that are, will be, or
perhaps should be, part of the complete  Portable  Standard  LISP  (PSL)  based
programming environment that is being developed at the University of Utah.  The
report will briefly mention the nature, purpose and status of each feature, and
where  possible,  refer  to  a  more  complete  reference.    Most of the tools
described below have been  run  on  one  or  more  current  implementations  of
Standard  LISP,  and  their  conversion  to run on the new PSL should be rather
simple.



1.1. Acknowledgement
  I would like to acknowledge the advice and software contributions to the
SYSLISP/STDLISP  environment  of  a  large  number  of people who work with the
existing or new Standard LISP; many of these have contributed one or more tools
mentioned in the  following  sections:    E. Benson,  W. Galway,  A. C.  Hearn,
R. Kessler,  G. Maguire,  J. Marti,  D. Morrison,  A. Norman,  and J. Peterson.
Most of the Programming environment of  PSL  is  being  directly  adapted  from
Standard  LISP  programs  currently  running  on  a  variety  of  Standard LISP
implementations at Utah, and on machines accessible via the ARPA Net.



1.2. Goals of the Utah PSL Project
  The goal of the PSL project is to  produce  an  efficient  and  transportable
Standard LISP system that may be used to:

   a. experimentally  explore  a  variety  of  LISP  implementation issues
      (storage management, binding,..);

   b. effectively support  the  REDUCE  algebra  system  on  a  number  of
      machines;

   c. provide  the  same,  uniform, modern LISP programming environment on
      all of the machines that we  use  (DEC-20,  VAX  and  some  personal
      machine,  perhaps  68000 based), of the power and complexity of UCI-
      LISP or MACLISP, with some extensions and enhancements.

  The approach we have been using  is  to  write  the  entire  LISP  system  in
Standard  LISP (with extensions for dealing with machine words and operations),
and to bootstrap it to the desired target machine in two steps:

   a. Cross compile an appropriate kernel to the assembly language of  the
      target machine;

   b. Once  the  kernel is running, use a resident compiler and loader, or
      fast-loader, to build the rest of the system.

  We currently think of the extensions to Standard LISP as having  two  levels:
the  SYSLISP  level,  dealing  with  WORDS  and  BYTES  and machine operations,
enabling us to write essentially all of the kernel in Standard LISP;  and,  the
STDLISP level, incorporating all of the features that make Standard LISP into a
modern LISP.

  In  our  environment,  we  write  LISP  code using an ALGOL-like preprocessor
language, RLISP, that provides a number of  syntactic  niceties  that  we  find
convenient;  we  do  not  distinguish  LISP  from  RLISP,  and can mechanically
translate from one to the other in either direction.
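
  As a rough illustration (this particular example is illustrative only
and is not taken from the RLISP sources), the factorial function might be
written in RLISP and mechanically translated into the equivalent Standard
LISP form roughly as follows; the RLISP surface syntax is shown in comment
lines, % being the comment character.

   % RLISP surface syntax:
   %   symbolic procedure fact n;
   %      if n = 0 then 1 else n * fact(n - 1);
   %
   % Mechanical translation into Standard LISP:
   (DE FACT (N)
     (COND ((EQUAL N 0) 1)
           (T (TIMES N (FACT (DIFFERENCE N 1))))))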

2. The SYSLISP/STDLISP Kernel



2.1. Overview of SYSLISP
  SYSLISP [3]  is  a  BCPL-like  language,  couched  in  LISP  form,  providing
operations  on  machine  WORDs,  machine  BYTEs and LISP ITEMS (tagged objects,
packed into one or more words). The control structures, and definition language
are those of LISP, but the familiar PLUS2, TIMES2,  etc.  are  mapped  to  word
operations  WPLUS2, WTIMES2, etc.  SYSLISP handles static allocation of SYSLISP
variables and arrays and initial LISP symbols, permitting the  easy  definition
of  higher  level  STDLISP  functions,  and  storage  areas.  SYSLISP  provides
convenient compile time constants for handling strings, LISP symbols, etc.  The
SYSLISP compiler is based on the Portable Standard  LISP  Compiler  [10],  with
extensions  to  handle WORD level objects, and efficient (opencoded) WORD level
operations. Currently, SYSLISP  handles  BYTEs  through  explicit  packing  and
unpacking operations GETBYTE(word-address, byte-number) /
PUTBYTE(word-address, byte-number, byte-value), without the notion of
byte address; it is
planned  to  extend  SYSLISP  to  a  C-like  language,  adding  the appropriate
declarations and analysis of WORD/BYTE/Structure operations.
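
  As a rough sketch of what code at the SYSLISP level looks like (this
routine is invented for illustration and is not part of the kernel; for
simplicity the loop control uses ordinary ZEROP and SUB1, where real
SYSLISP code would use word operations throughout), a byte-clearing loop
built from the primitives named above might be written:

   % Clear N bytes starting at word address PTR, using the PUTBYTE
   % and WPLUS2 primitives described above.
   (DE ZEROBYTES (PTR N)
     (PROG (I)
           (SETQ I 0)
      LOOP (COND ((ZEROP N) (RETURN NIL)))
           (PUTBYTE PTR I 0)
           (SETQ I (WPLUS2 I 1))
           (SETQ N (SUB1 N))
           (GO LOOP)))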

  STATUS: SYSLISP currently produces FORTRAN and DEC-20 machine  code  for  the
DEC-20;  some exploratory PDP-11 machine code has been produced, in preparation
for PDP-11/45 and VAX-750 implementations.



2.2. Overview of STDLISP
  STDLISP [11] is an extended Standard LISP [20]; in most cases the  extensions
are  in  the  form  of  additional or more powerful functions, or features. The
Standard LISP Report [20], as a specification of an  interchange  LISP  subset,
did  not  go  into  implementation details. The STDLISP manual does specify all
(important) implementation details, and all source code may  be  consulted  for
further detail. STDLISP currently provides the following facilities on the DEC-
20:

   a. Tagged Data Types: [Currently 18 bit address field, 9 bit tag field,
      9  bit  GC/RELOC  field;  plan  to go to DEC-20 extended addressing,
      using Rutgers Extended UCI-LISP ideas]

      ID                  name,value,property-list,function-cell
      INT                 small integer
      FIXNUM              full-word integer
      FLOATING            full-word  float  -  [Not  Yet  Implemented   in
                          current system]
      BIGNUM              arbitrary  precision  integer,  with  BIGNUM and
                          BIGBITS  operations.    [Standard  LISP   source
                          exists,  has  not been tested in current STDLISP
                          environment]
      PAIR                "cons" cell
      STRING              vector of characters,  with  indexing  and  sub-
                          string operations
      CODE                machine  code blocks [currently not collected or
                          relocated by garbage collector]
      WORDS               vector of words, not traced by GC
      VECTOR              vector of ITEMs, are traced by GC

   b. Compacting Garbage Collector [Currently uses single HEAP, and 9  bit
      GC/RELOC  field; will be replaced by multi-heap and bit-map GC/RELOC
      phase, or copying GC]

   c. Shallow Binding using a Binding Stack for old values  [Currently  No
      FUNARG,  but  an "efficient" implementation of Baker's Rerooting [2]
      scheme is underway]

   d. CATCH and THROW, used to implement ERROR/ERRORSET, and interpreted
      control structures such as PROG, WHILE, LOOP, etc. (a small sketch
      of this layering follows this list).

   e. A  single  LAMBDA  type,  corresponding  to  the Interpreted form of
      Compiled code.  APPLY(lambda-or-code, args) passes the args in a set
      of  registers,  used  by  the STDLISP/SYSLISP compiler. Compiled and
      interpreted code is uniformly  called  through  the  function  cell,
      which  ALWAYS  contains  the  address of executable code, permitting
      fast compiled-compiled calls, without the FAST-LINK Kludges.

      All   argument   processing   of   the   various   SPREAD/EVAL/Macro
      combinations  (EXPR,  FEXPR,  MACRO, NEXPR), are attributes of an ID
      only, not of code  or  LAMBDA's.  This  permits  uniform,  efficient
      compilation.

   f. Compiler  and  LAP.  [Currently,  only  LAP  has  been tested on the
      FORTRAN DEC-20 system; loading the compiler will be  no  problem;  a
      FAST  loader  will  also  be  implemented,  using a Portable LAP/FAP
      package [6]]
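
  As noted in item d above, ERROR and ERRORSET are layered on CATCH and
THROW.  A much simplified sketch of that layering (not the actual PSL
source; a tagged CATCH/THROW is assumed, and the MSGP and TR arguments and
message printing of the real ERRORSET are ignored) is:

   % ERRORSET returns a list of the value of FORM or, if ERROR is called
   % during the evaluation, the error number thrown to the enclosing
   % CATCH.
   (DE ERRORSET (FORM MSGP TR)
     (CATCH '$ERROR$ (LIST (EVAL FORM))))

   (DE ERROR (NUMBER MESSAGE)
     (THROW '$ERROR$ NUMBER))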

  STATUS:  A  complete  FORTRAN-based  Standard  LISP  interpreter   is   fully
operational  on  the  DEC-20,  compiled  by  the SYSLISP->FORTRAN compiler. The
source consists of 250 lines of DEC-20 macro (actually FAIL) for some I/O, JSYS
and byte primitives, and the rest consists of about 5000 lines of  SYSLISP  and
RLISP  code.    This  interpreter  is  used to debug and refine the sources and
selection of primitives (such as improved DEBUG, TRACE, LAP etc), and  will  be
used  as  a bootstrap aid. The DEC-20 machine code version is in the process of
being built, and is expected to take a few days to become operational (the only
change from the FORTRAN version is in the recoding of the SYSLISP c-macros, and
the 250 lines of FAIL).

  The  next  step  will  be  either  to  bring up an extended addressing DEC-20
Standard LISP, using essentially the same c-macros, and some additional  kernel
code  being  developed  at Rutgers for an extended addressing R/UCI LISP on the
DEC-20 (ref. C. Hedrick), or to begin a VAX-750 implementation.



2.3. The Compiler and Loader
  STDLISP will provide an efficient  machine  code  compiler,  small  and  fast
enough to be used as a resident compiler with resident LAP, and certainly as an
out-of-line  compiler.  This is exactly the same STDLISP/SYSLISP compiler being
used to compile the kernel to machine code, and will be trivially  modified  to
emit  LAP.  The  compiler  is  an  extension  of  the  Portable  LISP  Compiler
(PLC) [10]; the PLC has been used with great success for  Standard  LISP  on  a
large  number  of  machines (IBM 360/370, UNIVAC 1108, CDC 6600/7600, Burroughs
6700/6800, DEC-10/DEC-20), using either an existing LISP  modified  to  support
Standard  LISP,  or  a  newly  written Standard LISP (using BCPL, SDL, ALGOL or
FORTRAN to handcode the kernel). In most cases, the existing PLC and LAP  could
be  used  almost  unchanged for a new implementation of STDLISP. The extensions
are mainly those to support SYSLISP, and  to  provide  greater  efficiency  for
OPENCODED  arithmetic;  block compilers based on the PLC have been developed by
Hearn, and by Norman. The PLC is also being used by Rutgers as a  compiler  for
their extended addressing R/UCI-LISP for the DEC-20.

  The  PLC  uses a register model, with arguments passed in registers, and some
temporaries (and  some  arguments)  saved  in  a  stack  frame;  most  compiled
functions  use  arguments  and  local variables only lexically, and the name is
usually compiled away; a declaration FLUID is required to  invoke  the  shallow
binding  model.  Many  functions  do not actually have to use the stack at all.
This leads to very efficient code on the machines  that  we  have  explored.  A
stack version of this compiler could be developed, and may be needed if (when?)
the  correct  handling  of  interrupts  is addressed. The PLC emits its code in
terms of a set of c-macros, which can fairly easily be implemented on conventional
register  machines;  additional  control  of  the  output  code  is  gained  by
parameters set up for the compiler in a configuration file.

  Currently,  the c-macro loader (LAP), and binary loader (FAP), are based on a
variety of ad-hoc loaders that have been written for the various  machines  and
adapted  for  new machines. Frick [6] has written a general purpose LAP and FAP
in a much more portable fashion (using  a  set  of  configuring  parameters  to
describe  the  kind  of target machine), and it is planned to adopt this as the
basic LAP/FAP package when the STDLISP kernel is stable.

3. Overview of the Tool Philosophy
  Upon the above base, we will provide a number of programming tools, that have
been, or will be  developed  or  adapted  from  other  LISPs  (similar  to  the
adaptation  of some InterLISP [29] tools into UCI-LISP [4]).  We plan to follow
the UNIX [25] and RATFOR Software Tools [17] approach as far  as  possible,  by
having  a  number  of  small,  self-contained  tools, each well documented, and
hopefully usable independently,  or  in  concert,  without  too  much  required
knowledge  about  other tools. We will probably provide an interface to the new
RATFOR tools [28] (recently bootstrapped onto our DEC-20) by using  a  Standard
LISP based RATFOR "shell", or even a RATFOR-> SYSLISP translator.

  {Here  we should contrast InterLISP, MDL, UCI-LISP, LISPM and RATFOR/UNIX/PWB
approaches}

  In the following sub-sections we will briefly describe each of the "tools" or
"tool-packages".



3.1. Language Tools

RLISP parser        RLISP is an extensible ALGOL-like language found to be more
                    convenient to people working in algebraic  language  areas,
                    particularly  computer  algebra.  The  current parser is an
                    "ad-hoc" top-down recursive descent  parser.  We  have  two
                    alternative  RLISP  parsers, one of which might be adopted:
                    one is based on a general Pratt parser  [24, 22],  and  the
                    other  on  the META/LISP compiler [19, 21, 18].  All of our
                    code is written in RLISP.

META and MINI       These are two compiler generator systems, accepting a  BNF-
                    like  description  of  the  language, producing a LISP-like
                    parse tree, which is then further translated or transformed
                    using a pattern matcher.  MINI [18] is smaller  and  faster
                    than  META  [19, 21] but has only a subset of the features.
                    In particular, MINI does not  have  BACKUP,  output  format
                    statements,  or as powerful an error handling capability as
                    META.  [Current  work  includes  optimizing   the   pattern
                    matcher,   improving   the  error  handler  interface,  and
                    defining one or more table driven scanners  that  can  more
                    effectively support both MINI/META and STDLISP]

Parser              This is a version of a Pratt Parser [24, 22], an extensible
                    top-down  parser  using RIGHT-LEFT operator precedences and
                    special functions for more complex structures.

RLISP pretty printers
                    RPRINT and STRUCT are programs  to  convert  Standard  LISP
                    back  into RLISP; also used to "tidy" older code, inserting
                    more structured WHILE, REPEAT, FOREACH type loops in  place
                    of PROG/GOTO combinations.

LISP pretty printers
                    We  have  table  driven  programs [16] to "grind" LISP code
                    into a nicely indented form.

3.2. Algebra and High Precision Arithmetic

REDUCE              REDUCE is a complete computer algebra system [14] that
                    runs upon Standard LISP. This system is  used  stand  alone
                    for   algebraic   manipulation   of  polynomials,  rational
                    functions and general expressions,  including  derivatives,
                    integration, pattern matching, symbolic matrices, etc. Some
                    projects have (or could) use REDUCE as part of more complex
                    systems,    such    as    program   verification,   program
                    transformation, computer aided geometric design, and VLSI.

BIG-FLOAT           This is a  general  purpose  arbitrary  precision  floating
                    point  package,  built upon the arbitrary precision integer
                    package [26, 27].



3.3. Debugging Tools

DEBUG               DEBUG is a very portable package of functions for tracing,
                    breaking  and  embedding functions [23]. Facilities include
                    the (conditional) tracing of function calls and interpreted
                    SETQs;  selective   backtrace;   embedding   functions   to
                    selectively  insert pre- and post- actions, and conditions;
                    primitive statistics gathering; generation of simple  stubs
                    (print  their  name  and  argument,  and  read  a  value to
                    return); and, printing of circular and re-entrant lists.



3.4. Editors
  Most of our users have  used  the  existing  SYSTEM  editor  to  prepare  and
maintain  their code; we have provided STDLISP JSYS/FORK calls on the DEC-20 to
rapidly get in and out of the major editors. We also  have  some  LISP  "incore
editors":

EDIT                A simple line-oriented editor based on SOS/EDIT for editing
                    RLISP/REDUCE   and  some  LISP  input;  mostly  for  people
                    familiar with these editors.

EDITOR              A simple LISP structure editor based on the  InterLISP/UCI-
                    LISP structure editor.

EMID                A  multi-window, multi-buffer EMACS-like screen editor [1].
                    This is planned to be the major interface  to  the  STDLISP
                    system,  and  will have convenient commands (MODES) to edit
                    LISP and RLISP, examine LISP documentation and convert LISP
                    and RLISP to and from other  convenient  forms.  There  are
                    "autoparen"  modes  in  which  an  expression  typed into a
                    buffer automatically EVALs as soon  as  the  expression  is
                    complete.    EMID  has  also  been  used  to experimentally
                    develop a VLSI SLA editor (SLATE) [5] and will be  used  to
                    do  algebraic  expression "surgery".  [Currently, EMID runs
                    on the DEC-20 LISP 1.6 based Standard LISP,  and  can  only
                    drive   a   Teleray  terminal;  it  will  be  converted  to
                    SYSLISP/STDLISP during the following months]



3.5. Source Code Control

CREF                CREF processes a number of source files, cross-
                    referencing the functions and global variables used; it
                    gives an indication of where each function is defined or
                    redefined, its type (EXPR, FEXPR, etc.), the functions
                    and variables it uses, any undefined functions and
                    variables, and other statistics that can be selected or
                    deselected under flag control [12].



3.6. Documentation Tools

HELP                HELP displays short text descriptions for major
                    functions on request, by reading a documentation data
                    base; it should also display an activity based HELP-TEXT
                    (e.g. in response to ? at appropriate points).

MANUAL              MANUAL produces a complete reference manual from
                    (selected portions?) of the HELP/MANUAL data base.

  [Both HELP and MANUAL require a considerable amount of work in the conversion
and writing of pieces of text; perhaps these can be generated directly from the
source code by including special comments (%H, %D, etc.); we also need to
co-ordinate with the SCRIBE sources for the various documents already written.]
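
  [A sketch of how such special comments might sit in a source file; the %H/%D
convention is only the suggestion made above, and MYREVERSE is merely a small
illustrative function, not part of any existing package.]

        %H MYREVERSE: reverse the top level of a list.
        %D (MYREVERSE L) returns a new list whose elements are those of L
        %D in reverse order; L itself is not changed.
        (DE MYREVERSE (L)
          (PROG (R)
           LOOP (COND ((NULL L) (RETURN R)))
                (SETQ R (CONS (CAR L) R))
                (SETQ L (CDR L))
                (GO LOOP)))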

4. Other Planned Facilities in PSL



4.1. Mode Analyzing RLISP/REDUCE
  MODE-REDUCE is an ALGOL-68- or PASCAL-like interface to Standard LISP, which
provides an additional MODE analysis pass after parsing,  to  rebind  "generic"
function names to "specific" functions, based on the declared or analysed MODEs
of  arguments. The system includes a variety of MODE generators (STRUCT, UNION,
etc). [15, 9, 13] We plan to reimplement this  system  to  use  SYSLISP/STDLISP
more effectively. We will also make the MODE-ANALYSIS phase part of SYSLISP, so
that WORDS, BYTES, ITEMS etc. can co-exist more naturally.


4.2. Funarg, Closures and Stack Groups
  Currently  we  are implementing a variant of Baker's [2] re-rooting scheme to
work well in  the  shallow  binding  environment;  we  expect  that  non-funarg
compiled  code  will  run  essentially as fast as in LISP 1.6. Context switches
will be more expensive.

  We may also implement some form of Stack Group, as done by the LISP machine
group [8, 30], to provide faster large context switches.
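
  [For illustration, a small example of the dynamic (FLUID) binding that the
shallow binding implementation must support; SCALE, SCALE-BY and
APPLY-WITH-SCALE are made-up names, and FLUID, TIMES2, APPLY and FUNCTION are
as in the Standard LISP Report.]

        (FLUID '(SCALE))
        (SETQ SCALE 10)                         % global value of SCALE

        (DE SCALE-BY (X) (TIMES2 SCALE X))      % refers to the FLUID SCALE

        (DE APPLY-WITH-SCALE (FN S X)
          (PROG (SCALE)                         % PROG rebinds the FLUID
            (SETQ SCALE S)
            (RETURN (APPLY FN (LIST X)))))

        % (SCALE-BY 7)                               ==> 70 (global binding)
        % (APPLY-WITH-SCALE (FUNCTION SCALE-BY) 3 7) ==> 21 (inner binding)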



4.3. Packages and Modules
  We will implement some form of multiple name space, compatible with our mode
system, and ideas developed by the LISP machine group.  This is quite important
in order to restrict access to the low-level system functions, all of which are
callable from LISP.

5. Other Planned Tools



5.1. Interface or translation of latest RATFOR Tools system
  A Mini-PCL or "shell" written in LISP to permit rapid access to the RATFOR
tools, or conversion of selected tools to Standard LISP (using a
RATFOR->SYSLISP translator).

  Adaptation  of  some  of  the  Software  Tools  Primitives for use as STDLISP
primitives.



5.2. Adaptation of some InterLISP and UCI-LISP tools
  {Most likely the DEFSTRUCT package and the extended BREAK package. Our  users
will probably not use the structure editor, but rather use an EMACS fork on the
DEC-20, or hopefully EMID}.



5.3. Graphics
  A LISP based graphics package for CAGD, plotting, etc., based on experiments
with the LISP based PictureBALM2 [7] on the Tektronix and the Evans and
Sutherland Picture System.  We will also make the EMID window handler available
as a general "piece-of-paper" facility.



5.4. Miscellaneous
  A source code update/downdate program to maintain a baseline with correction
"decks".


6. References

[1]   Armantrout, R.; Benson, E.; Galway, W.; and Griss, M. L.
      EMID: A Multi-Window Screen Editor Written in Standard LISP.
      Utah Symbolic Computation Group, Operating Note No. xx, University of
         Utah, Computer Science Department, Jan, 1981.

[2]   Baker, H. G.
      Shallow Binding in LISP 1.5.
      CACM 21(7):565, July, 1978.

[3]   Benson, E. and Griss, M. L.
      SYSLISP: A portable LISP based systems implementation language.
      Utah Symbolic Computation Group, Report UCP-xx, University of Utah,
         February, 1981.

[4]   Bobrow, R. J.; Burton, R. R.; Jacobs, J. M.; and Lewis, D.
      UCI LISP MANUAL (revised).
      Online Manual RS:UCLSP.MAN, University of California, Irvine, ??, 1976.

[5]   Carter, T.; Goates, G.; Griss, M.L.; and Haslam, R.
      SLATE: A Lisp Based EMACS Like Text Editor for SLA Design.
      Utah Symbolic Computation Group, Operating Note  CS566 , University of
         Utah, Computer Science Department, Jan, 1981.

[6]   Frick, I. B.
      A Portable Lap and Binary Loader.
      Utah Symbolic Computation Group Operating Note No. 52, University of
         Utah, November, 1979.

[7]   Goates, G. B. , M. L. Griss, and G. J.  Herron.
      PICTUREBALM: A LISP-based Graphics Language with Flexible Syntax and
         Hierarchical Data Structure.
      In Proceedings of SIGGRAPH-80, Computer Graphics, pages 93-99.  ACM,
         1980.

[8]   Greenblatt, R.
      The LISP Machine.
      Technical Report ?, MIT, August, 1975.

[9]   Griss, M. L.
      The Definition and Use of Data-Structures in Reduce.
      In Proceedings of SYMSAC 76, pages 53-59.  SYMSAC, August, 1976.

[10]  Griss, M. L. and Hearn, A. C.
      A Portable LISP Compiler.
      Utah Symbolic Computation Group, Report  UCP-76, University of Utah,
         June, 1979.
      (To be published in Software Practice and Experience).


[11]  Griss, M. L.
      The Portable Standard LISP Users Manual.
      Utah Symbolic Computation Group, TR- xx, University of Utah, March, 1981.

[12]  Griss, M. L.
      RCREF:  An Efficient REDUCE and LISP Cross-Reference Program.
      Utah Symbolic Computation Group, Operating Note No. 30, ??, 1977.

[13]  Griss, Martin L.; Hearn, A. C; and Maguire, G. Q., Jr.
      Using The MODE Analyzing version of REDUCE.
      Utah Symbolic Computation Group, Operating Note No. 48, Dept of CS, U of
         U, Jun, 1980.

[14]  Hearn, A. C.
      REDUCE 2 Users Manual.
      Utah Symbolic Computation Group UCP-19, University of Utah, 1973.

[15]  Hearn, A. C.
      A Mode Analyzing Algebraic Manipulation Program.
      In Proceedings of ACM 74, pages 722-724.  ACM, New York, New York, 1974.

[16]  Hearn, Anthony C.; and Norman, Arthur C.
      A One-Pass Prettyprinter.
      Utah Symbolic Computation Group, Report  UCP-75, Dept of CS, U of U, May,
         1979.

[17]  Kernighan, B. W. and Plauger, P. J.
      Software Tools.
      Addison-Wesley, Reading, Mass., 1976.

[18]  Kessler, R. R.
      PMETA - Pattern Matching META/REDUCE.
      Utah Symbolic Computation Group, Operating Note No. 40, University of
         Utah, January, 1979.

[19]  Marti, J. B.
      The META/REDUCE Translator Writing System.
      SIGPLAN Notices 13(10):42-49, 1978.

[20]  Marti, J. B., et al.
      Standard LISP Report.
      SIGPLAN Notices 14(10):48-68, October, 1979.

[21]  Marti, J. B.
      A Concurrent Processing for LISP.
      PhD thesis, University of Utah, Jun, 1980.

[22]  Nordstrom, M.
      A Parsing Technique.
      Utah Computational Physics Group Operating Note 12, University of Utah,
         November, 1973.


[23]  Norman, A.C. and Morrison, D. F.
      The REDUCE Debugging Package.
      Utah Symbolic Computation Group, Operating Note No. 49, Dept of CS, U of
         U, Feb, 1981.

[24]  Pratt, V.
      Top Down Operator Precedence.
      In Proceedings of POPL-1 ?, pages ??-??.  ACM, ??

[25]  Ritchie, D. M., and Thompson, K.
      The UNIX Time-Sharing System.
      CACM 17(7):365-376, July, 1974.

[26]  Sasaki, T.
      An Arbitrary Precision Real Arithmetic Package in REDUCE.
      Utah Symbolic Computation Group, Report  ucp-68, Dept of CS, U of U, Apr,
         1979.

[27]  Sasaki, T.
      Manual for Arbitrary Precision Real Arithmetic System in REDUCE.
      Utah Symbolic Computation Group, Technical Report  8, Dept of CS, U of U,
         May, 1979.

[28]  Scherrer, Debbie.
      Software Tools Programmer's Manual.
      Advanced Systems Research Group, Report LBID 097, Lawrence Berkeley
         Laboratory, Computer Science and Applied Mathematics Department,
         Lawrence Berkeley Laboratory, Berkeley, CA 94720, Jan, 1981.

[29]  Teitelman, W.; et al.
      Interlisp Reference Manual, (3rd Revision).
      Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto,Calif.
         94304, 1978.

[30]  Weinreb, D. and Moon, D.
      LISP Machine Manual (second preliminary version).
      M.I.T., January, 1979.


                               Table of Contents
1. Introduction                                                               1
     1.1. Acknowledgement                                                     1
     1.2. Goals of the Utah PSL Project                                       1
2. The SYSLISP/STDLISP Kernel                                                 2
     2.1. Overview of SYSLISP                                                 2
     2.2. Overview of STDLISP                                                 2
     2.3. The Compiler and Loader                                             4
3. Overview of the Tool Philosophy                                            4
     3.1. Language Tools                                                      5
     3.2. Algebra and High Precision Arithmetic                               6
     3.3. Debugging Tools                                                     6
     3.4. Editors                                                             6
     3.5. Source Code Control                                                 7
     3.6. Documentation Tools                                                 7
4. Other Planned Facilities in PSL                                            7
     4.1. Mode Analyzing RLISP/REDUCE                                         7
     4.2. Funarg, Closures and Stack Groups                                   8
     4.3. Packages and Modules                                                8
5. Other Planned Tools                                                        8
     5.1. Interface or translation of latest RATFOR Tools system              8
     5.2. Adaptation of some InterLISP and UCI-LISP tools                     8
     5.3. Graphics                                                            8
     5.4. Miscellaneous                                                       8
6. References                                                                 9
-------

          --------------------
End forwarded messages
		

∂02-Apr-81  1617	CLR at MIT-XX 	Status of MDL Project    
Date:  2 Apr 1981 1810-EST
From: CLR at MIT-XX
Subject: Status of MDL Project
To: Englemore at USC-ISI
Redistributed-To: Kahn at ISI, Adams at ISI, 
Redistributed-To: Yonke at BBN, Zdybel at BBN, 
Redistributed-To: Wilson at CCA, 
Redistributed-To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
Redistributed-To: Balzer at ISIB, Crocker at ISIF, 
Redistributed-To: JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-AI, Henry at RAND-AI, 
Redistributed-To: Hedrick at RUTGERS, 
Redistributed-To: Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, 
Redistributed-To: RWW at SU-AI, RPG at SU-AI, 
Redistributed-To: Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, 
Redistributed-To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, 
Redistributed-To: Engelman at USC-ECL
Redistributed-By: ENGELMORE at USC-ISI
Redistributed-Date:  2 Apr 1981

Status Report:	MDL	(Chris Reeve & Marc Blank)


1.	Description of Project

	We are implementing a machine independent version of the MDL
language.  MDL is a LISP-like language that was initially developed in
the early 70s.  The current MDL runs on ITS, TOPS-20 and TENEX operating
systems.  Like most LISP systems developed in the late 60s/early 70s, the
current MDL interpreter is written in assembly language.  As in other cases,
this has led to problems in moving MDL to other machines and has made
modification/maintenance of the interpreter difficult.

	The approach being taken to building the machine independent version
of MDL has been to define a virtual MDL machine.  This machine has about
100 relatively high-level instructions.  Example instructions are things
like NTHL to NTH a LIST, RESTV to REST a VECTOR, ADD to add numbers etc.
The virtual machine is byte oriented in that it defines MDL objects in terms
of 8 bit bytes.  This approach was chosen to make porting to the many
currently available personal machines easier.

	The approach to building MDL in this virtual machine environment
has been to write the MDL interpreter in MDL and modify the MDL compiler
to produce virtual machine code.  The virtual machine code can then either
be interpreted or compiled for a given target machine.
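
        For illustration only, the dispatch idea can be suggested by a toy
written in ordinary (Standard LISP style) LISP rather than MDL; RESTL here
merely stands in for instructions such as RESTV and NTHL, and the real
virtual machine, its byte-oriented object encoding and its roughly 100
instructions are not shown.

        (DE VM-STEP (OP STACK)
          % One step of a toy stack machine: OP is an instruction name and
          % STACK is a list used as the operand stack; return the new stack.
          (COND ((EQ OP 'ADD)                % add the top two numbers
                 (CONS (PLUS2 (CAR STACK) (CADR STACK)) (CDDR STACK)))
                ((EQ OP 'RESTL)              % REST (CDR) of the list on top
                 (CONS (CDR (CAR STACK)) (CDR STACK)))
                (T (ERROR 99 (LIST "unknown opcode" OP)))))

        % e.g.  (VM-STEP 'ADD '(1 2 5))  ==>  (3 5)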

2.	Distinguishing Features of the MDL Language and Environment

	MDL is a language that bridges the gap between LISP languages and
algebraic languages.  From the LISP world MDL includes the following
features:

	a.	A highly interactive environment for program development and
		program debugging.

	b.	Programs are data.

	c.	What-you-see-is-what-you-get.  All MDL data structures have
		well defined printing representations.

	d.	Garbage collection.

	e.	Easy intermixing of compiled and interpretive code.

	f.	Very flexible argument list specifications for FUNCTIONS.

From the algebraic language world MDL has the following features:

	a.	An integrated variable declaration mechanism to aid debugging
		and compilation.

	b.	A large number of built-in datatypes including FIX, FLOAT,
		CHARACTER, LIST, VECTOR, STRING, FORM, UVECTOR (uniform
		vector), ATOM etc.

	c.	User-defined datatypes based on the built-in primitive types
		or based on arbitrary record definitions.

The MDL environment includes:

	a.	A structure oriented editor and an ASCII editor (EMACS-like)
		that knows about MDL structures.

	b.	A compiler that minimizes differences between compiled and
		interpreted code.  SPECIAL checking can be done with
		interpreted code to aid this. 

        c.      A package/library system that simplifies sharing programs among
                users and minimizes naming conflicts.

        d.      An & printer that facilitates printing long and circular structures.

	e.	A fairly standard pretty printer.

	f.	A cross reference lister.

	g.	Debugging tools including breakpoints, tracing, one-stepping
		and monitoring read and/or write access to MDL objects.

	h.	A program called GLUE to bind compiled functions together to
		eliminate calling overhead.

	i.	A pure dumper to put compiled programs in a mapped overlay
		area out of the way of the garbage collector and able to be
		shared.

        j.      A purification system to move "pure" structures out of the
                garbage collector's way.

3.	Operational?

	PDP-10/TOPS-20 MDL has been operational for about 8 years.  The latest
version of TOPS-20 MDL uses extended addressing for overlaid compiled code.
MDL creates a number of additional sections and maps garbage collected space
into each of them.  It then allows different pieces of compiled code to be
mapped into each section.

	Machine independent MDL has been on the 20 for a few weeks now.  We
are still shaking bugs out of it.  We have run most of the interpreter in it.
This MDL includes an MDL compiler that produces virtual machine code, an open
compiler that translates virtual machine code into TOPS-20 instructions, and a
virtual machine interpreter.  The system will be capable of using the 20's
extended addressing space for all structures and code.  Use of the extended
addressing is being delayed until everything works in non-extended mode since
our machine language debugger doesn't understand extended mode.  However, we
anticipate no problems getting things to work in extended mode since we have
learned a lot by modifying "old" MDL. 

4.	Schedule

	We are going to have Apollo Domains very shortly and we plan to port
MDL to the Domain.  By September 1981 we will have a complete MDL system based
on machine independent MDL on the 20 and the Domain.  By the end of calendar
1981 both of these systems will have achieved the same level of reliability
and robustness available on the current MDL.  We intend to bring up MDL on
the VAX by mid 1982.
-------

∂02-Apr-81  1618	Feldman@SUMEX-AIM 	Lisp PLITS status report  
Date:  2 Apr 1981 1433-PST
From: Feldman@SUMEX-AIM
Subject: Lisp PLITS status report
To:   engelmore@USC-ISI
Redistributed-To: Kahn at ISI, Adams at ISI, 
Redistributed-To: Yonke at BBN, Zdybel at BBN, 
Redistributed-To: Wilson at CCA, 
Redistributed-To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
Redistributed-To: Balzer at ISIB, Crocker at ISIF, 
Redistributed-To: JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-AI, Henry at RAND-AI, 
Redistributed-To: Hedrick at RUTGERS, 
Redistributed-To: Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, 
Redistributed-To: RWW at SU-AI, RPG at SU-AI, 
Redistributed-To: Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, 
Redistributed-To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, 
Redistributed-To: Engelman at USC-ECL
Redistributed-By: ENGELMORE at USC-ISI
Redistributed-Date:  2 Apr 1981

                         The PLITS Project
                           
                    J.A. Feldman and G.W. Cottrell      


                We are involved in providing and using a uniform structuring
        device for distributed systems which can be implemented in many
        languages.  We plan to overlay the PLITS language model [Feldman,
        1979] onto several languages and across several machines.  The basic
        idea of PLITS is that a user job can be comprised of modules written
        in various languages and executing on various machines.  The Unix IPC
        [Rashid, 1980] and its extension to our local network form part of
        the system base for PLITS.

                Our intention is to use this as a basis for implementing
        distributed job management.  A user would be able to start and
        control a job from any site in the network.  The distributed job
        manager and its agents will be able to control and clean up processes
        running on any machine.  The PLITS model of message slots has been
        extended from name-value pairs to name-type-value triples [Low, 1980]
        as a way of structuring inter-machine and inter-language
        communication.  We think that with this mechanism we can provide
        automatic data conversion between languages.  We are using Franz-Lisp
        as one of the target languages for our implementation.  If
        successful, the project will facilitate distributed computation in
        Franz-Lisp as well as the easy incorporation of non-Lisp modules.  We
        plan to implement a natural language dialogue system with the
        Lisp-PLITS package, and are currently working on an image
        understanding system which will have the low-level image processing
        written in C, and the intelligence in Lisp.

                Our environment includes Franz-Lisp, PASCAL, and C on
        VAX/Unix, and other systems not of direct relevance here.

                An experimental system is operational on the Vax which
        supports an earlier version of the Lisp-PLITS design of message
        primitives.  Message primitives in this system look like:

                (send (who 'parser) (about 'NP1) (msg 's-expr))

        This sends an s-expression to the parser module about a transaction
        which has been opened earlier.

                (receive (who 'knowledgebase) (about 'NP1))

        This receives a message about a transaction.  The received message is
        obtained through access functions.

                Design specifications, as noted above, are still being
        finalized.  We expect the Lisp-PLITS version to be running with the
        full message passing facility, including a message tracing mechanism,
        before the summer.
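
                For illustration only, a request/reply exchange built from
        the send and receive forms shown above might look as follows; the
        message content is arbitrary, and msg-contents is an assumed accessor
        name, since the design only says that received messages are obtained
        through access functions:

                (send (who 'parser) (about 'NP1) (msg '(parse sentence-17)))
                (setq reply (receive (who 'parser) (about 'NP1)))
                (print (msg-contents reply))   ; examine the parser's answer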


        1. Feldman, J.A. High Level Programming For Distributed Computing.
                Comm. ACM 22, 6 (June 1979), 353-368.

        2. Low, J.R. Name-Type-Value (NTV) Protocol Draft Proposal. TR 73,
                Computer Science Dept., U. of Rochester, Rochester, N.Y.,
                July 1980

        3. Rashid, R. An Inter-Process Communication Facility for UNIX.
                CMU-CS-80-124, Dept. of Computer Science, Carnegie-Mellon
                University, Pittsburgh, Pa. March 1980









-------

∂02-Apr-81  1617	BALZER at USC-ISIB 	INTERLISP-VAX STATUS REPORT   
Date:  2 Apr 1981 1548-PST
From: BALZER at USC-ISIB
Subject: INTERLISP-VAX STATUS REPORT
To:   ENGELMORE at ISI
cc:   BALZER
Redistributed-To: Kahn at ISI, Adams at ISI, 
Redistributed-To: Yonke at BBN, Zdybel at BBN, 
Redistributed-To: Wilson at CCA, 
Redistributed-To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
Redistributed-To: Balzer at ISIB, Crocker at ISIF, 
Redistributed-To: JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-AI, Henry at RAND-AI, 
Redistributed-To: Hedrick at RUTGERS, 
Redistributed-To: Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, 
Redistributed-To: RWW at SU-AI, RPG at SU-AI, 
Redistributed-To: Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, 
Redistributed-To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, 
Redistributed-To: Engelman at USC-ECL
Redistributed-By: ENGELMORE at USC-ISI
Redistributed-Date:  2 Apr 1981

1. Describe your project:
   Produce large address space Interlisp for the Vax. Secondary objective 
   is to port it to other machines.

2. What are the distinguishing features of your language and/or programming
   environment?
   NONE. It will be a fully compatible Interlisp (as currently running on
   DEC 10's and 20's, and on Xerox D-0's and Dorados). Such compatibility
   is modulo the normal arithmetic precision issues. File system compatibility
   as developed by Xerox PARC for the D-0 and Dorado will be maintained.
   The operating system interface (JSYS) will not be maintained compatibly.

3. Is your system operational? If yes, on what hardware? If no, when do
   you expect to be operational, and on what?
   Interlisp-Vax is not operational. Planned release date is March 1982 on Vax
   running under Unix. The March 1982 version is expected to be fully 
   functional. Additional releases will address tuning issues. 

   Intermediate milestone: June 1981 - an operational version of the Interlisp
   kernel, which will allow programs not dependent on the Interlisp environment
   to be tested. The remainder of the project will be devoted to getting the
   Interlisp environment to run on this kernel.

4. What are your present plans for further development? Include estimated
   milestone dates, if possible:
   None.
-------

∂03-Apr-81  0554	SHEIL at PARC-MAXC 	Status report on Interlisp-D  
Date:  3 APR 1981 0042-PST
From: SHEIL at PARC-MAXC
Subject: Status report on Interlisp-D
To:   englemore at USC-ISI
cc:   masinter
Redistributed-To: Kahn at ISI, Adams at ISI, 
Redistributed-To: Yonke at BBN, Zdybel at BBN, 
Redistributed-To: Wilson at CCA, 
Redistributed-To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
Redistributed-To: Balzer at ISIB, Crocker at ISIF, 
Redistributed-To: JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-AI, Henry at RAND-AI, 
Redistributed-To: Hedrick at RUTGERS, 
Redistributed-To: Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, 
Redistributed-To: RWW at SU-AI, RPG at SU-AI, 
Redistributed-To: Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, 
Redistributed-To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, 
Redistributed-To: Engelman at USC-ECL
Redistributed-By: ENGELMORE at USC-ISI
Redistributed-Date:  3 Apr 1981


                   Status report on Interlisp-D

1. Project description

The Interlisp-D project was formed to develop a personal machine
implementation of Interlisp for use as an environment for research
in artificial intelligence and cognitive science.  A principal aim
was to maintain complete upward compatibility with other Interlisp
implementations, such as Interlisp-10 on the DEC PDP-10, so as both
to preserve the considerable quantity of existing Interlisp
software and to allow the users of the new environment to share
software with other researchers.  The Interlisp-D implementation
has been carried out within the Xerox Palo Alto Research Center,
where Interlisp-D is currently one of the principal computational
environments used for artificial intelligence research.

2. Principal characteristics of Interlisp-D

Interlisp is a dialect of Lisp whose most striking feature is a
very extensive set of user facilities, including syntax extension,
error correction, and type declarations.  It has been in wide use
on a variety of time shared machines over the past ten years.  An
overview of the Interlisp programming environment can be found in
[Teitelman & Masinter, 1981].

Interlisp-D is an implementation of the Interlisp programming
system for the Dolphin and Dorado personal computers.  It is
complete and totally upward compatible with the widely used PDP-10
version.  All the Interlisp system software documented in the
Interlisp Reference Manual [Teitelman et al., 78] runs under
Interlisp-D, excepting only a few capabilities explicitly indicated
in that manual as applicable only to Interlisp-10.  The
completeness of the implementation has been demonstrated by the
fact that several very large, independently developed, application
systems have been brought up in Interlisp-D with little or no
modification.  Examples include the Mycin system for infectious
disease diagnosis [Shortliffe, 76], the KLONE knowledge
representation language [Brachman, 78] and the West tutoring system
[Burton & Brown, 78].

In addition to the standard Interlisp software, Interlisp-D
provides some new facilities to enable the Interlisp user to
exploit the personal computing environment.  These include a
complete set of raster scan graphics operations (documented in
[Burton, 1980]) and Xerox Pup style Ethernet software.  The latter
includes both a low level interface and a collection of higher
level protocols, including those used for communication with Xerox
printing and file servers.

Both the internal structure of Interlisp-D and an account of its
development are presented in [Burton, 1980].  Briefly, Interlisp-D
uses a byte-coded instruction set, deep binding, CDR encoding (in a
32 bit CONS cell) and incremental, reference counted garbage
collection.  All of these, as well as other performance critical
operations, have direct microcode support.  The use of deep
binding, together with a complete implementation of spaghetti
stacks, allows very rapid context switching for both system and
user processes.

The Interlisp-D system is written almost entirely in Lisp.  A
relatively small amount of microcode implements the Interlisp-D
instruction set and a smaller amount of systems implementation code
interfaces Interlisp-D to the lower levels of the operating system.
Virtually all of the latter is being absorbed into Lisp, in the
interests of transportability.  Currently, the initial system
contains approximately 3M bytes of code and data structures.

3. Host hardware characteristics

The Dolphin is a medium sized personal computer, physically similar
to a Xerox Alto.  The Dorado is a larger, faster machine.  Both are
microprogrammed, with 16-bit data paths and relatively large main
memories (~1 megabyte) and virtual address spaces (4M-16M 16 bit
words).  Both machines have a medium sized local disk, Ethernet
controller, a large raster scanned display and a standard Alto
keyboard with 64 unencoded keys and a "mouse" pointing device.  

4. Implementation status

Interlisp-D is the culmination of a series of developments that
began with the AltoLisp project in 1973.  While these earlier
systems successfully ran many Interlisp programs, the initial
implementation of Interlisp-D could perhaps most accurately be
considered to have been completed in Spring 1980, at which time it
was first made available to users.  Extensive performance tuning,
debugging and extension have been under way on an ongoing basis since
then.  The system is in active use by researchers (other than its
implementors) at both Xerox PARC and Stanford University.

5. Current activities and future developments

As Interlisp-D is being actively used by several research projects
at PARC and elsewhere, we anticipate that it will continue both to
grow and to be maintained for at least the next few years.  In the
near term, development will continue to focus on reliability,
performance, transportability, and functionality.

Reliability

As Interlisp-D supports a growing user community, reliability has
become a very high priority.  A considerable amount of effort has
been invested in diagnostics, stress testing the system, and
tracking down subtle interaction errors.  However, as Interlisp-D
is now approaching the level of stability and reliability of
Interlisp-10, we expect that this activity, while still high
priority, will require less investment.

Performance

Performance tuning a large Lisp system is distinctly non-trivial.
We have invested considerable effort, including the development of
several performance analysis tools, on the performance of
Interlisp-D and we expect to continue to do so.  Although overall
performance estimates can be misleading, Interlisp-D on the Dolphin
currently seems to be slightly superior to Interlisp-10 on an
otherwise unloaded PDP KA-10.  Although this level of performance
makes the Dolphin a comfortable personal working environment, we
have identified a number of improvements which we anticipate will
improve execution speed by between 20% and 100%.

Transportability

Another major thrust is the reduction of dependencies on specific
features of the present environment, so as to facilitate
Interlisp-D's implementation on other hardware.  Dependencies on
the operating system are being removed by absorbing many of the
higher-level (generally machine independent) facilities provided by the
operating system into Lisp code.  Gratuitous dependencies on
attributes of the hardware, such as the 16-bit word size, are being
removed and inherent ones isolated.  In addition to an abstract
desire for transportability, our efforts to share code with the
Interlisp-VAX and Interlisp-Jericho projects have provided on-going
encouragement in this direction.

Functionality

The principal planned extensions to the system's functionality
involve the extensions to Interlisp required for it to fully
exploit a personal computing environment.  The most significant of
these are further development of the graphics and network
facilities and their integration into Interlisp's rich collection
of programming support tools.  While radical changes to the
underlying language structures are made difficult by our
requirement of Interlisp compatibility, we also expect to make some
language extensions, including some form of object oriented
procedure invocation.


REFERENCES

 Brachman, R. et al.
      KLONE Reference Manual.  BBN Report No. 3848, 1978.

 Burton, R. and Brown, J.
      An investigation of computer coaching for informal learning
      activities.  International Journal of Man-Machine Studies,
      11, 1979, 5-24.

 Burton, R. et al.
      Papers on Interlisp-D. Xerox PARC report, SSL-80-4, 1980.

 Shortliffe, E.  Computer-based medical consultations.  American
      Elsevier, 1976.

 Teitelman, W. et al.  The Interlisp reference manual.  Xerox
      PARC, 1978.

 Teitelman, W. and Masinter, L. The Interlisp Programming
      Environment. IEEE Computer, 14:4, April, 1981, pp. 25-34.

-------

∂03-Apr-81  1205	Griss at UTAH-20 (Martin.Griss) 	Standard LISP Report  
Date:  3 Apr 1981 0642-MST
From: Griss at UTAH-20 (Martin.Griss)
Subject: Standard LISP Report
To: engelmore at USC-ISI
Redistributed-To: Kahn at ISI, Adams at ISI, 
Redistributed-To: Yonke at BBN, Zdybel at BBN, 
Redistributed-To: Wilson at CCA, 
Redistributed-To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
Redistributed-To: Balzer at ISIB, Crocker at ISIF, 
Redistributed-To: JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-AI, Henry at RAND-AI, 
Redistributed-To: Hedrick at RUTGERS, 
Redistributed-To: Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, 
Redistributed-To: RWW at SU-AI, RPG at SU-AI, 
Redistributed-To: Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, 
Redistributed-To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, 
Redistributed-To: Engelman at USC-ECL
Redistributed-By: ENGELMORE at USC-ISI
Redistributed-Date:  3 Apr 1981

Bob:
	Attached is the final version of the Standard LISP project summary
that Tony Hearn and I have prepared for distribution to the group. 

	Martin
-----------------------------------------------





                           The Standard LISP Project

                                      by

                      Martin L. Griss, University of Utah

                                      and

                      Anthony C. Hearn, RAND Corporation


                          Last Revision: 3 April 1981

1. Overview of the Standard LISP Projects

  This note describes the current status and future plans for the Standard LISP
projects at the University of Utah and RAND.

  Standard  LISP  is  a  dialect  of  LISP  that has been used predominantly to
transport the REDUCE computer algebra system to a large variety of computer and
operating system configurations. Standard LISP was specified (in the  "Standard
LISP  Report") as an Interchange LISP that can be (and has been) implemented on
top of a number of existing LISP systems. It did NOT completely specify a  full
LISP,  although it has been used as the starting point for a number of new LISP
implementations.

  In addition to REDUCE, a number of tools that can run on Standard  LISP  have
been  implemented  and  distributed.  These include a Portable LISP Compiler, a
META/LISP compiler-compiler, and some debugging  and  cross-referencing  tools.
Most  of  the  tools  described  below  have  been  run  on one or more current
implementations of Standard LISP, and  their  conversion  to  run  on  the  new
Portable Standard LISP (or PSL) to be described later should be rather simple.

  Current work is directed at:

   - improving the Portable LISP compiler to more effectively compile
     LISP code, with block compilation and efficient open coded
     arithmetic (at RAND);

   - developing a more complete specification of an extended Standard
     LISP, and providing a portable implementation strategy (the PSL
     project, at Utah);

   - developing additional Standard LISP tools for the PSL environment,
     and for the other Standard LISPs.

2. The Portable LISP Compiler

  The  Portable LISP Compiler (or PLC) is written entirely in Standard LISP, as
a rather small, efficient program. It has been used with great success for  the
compilation of Standard LISP on a large number of machines (IBM 360/370, UNIVAC
1108,  CDC 6600/7600, Burroughs 6700/6800, DEC-10/DEC-20, Z80), using either an
existing LISP modified to support Standard LISP, or a  newly  written  Standard
LISP (using BCPL, SDL, ALGOL or FORTRAN to hand code the kernel).

  The  PLC  uses a register model, with arguments passed in registers, and some
temporaries and arguments saved in a stack frame; most compiled  functions  use
arguments  and local variables only lexically, and the name is usually compiled
away; a declaration FLUID is required to invoke the shallow binding model. Many
functions do not actually have to use the stack at  all.  This  leads  to  very
efficient  code  on the machines that we have explored.  The PLC emits its code
in terms of a small set of abstract machine codes (or c-macros), which  can  be
efficiently  implemented  on conventional register machines; additional control
of the output code is gained by  parameters  set  up  for  the  compiler  in  a
configuration file.

  Currently,  the  c-macro  expander and loader (LAP), and binary loader (FAP),
are based on a variety of ad-hoc loaders that have been written for the various
machines and adapted for new machines. Frick has written a general purpose  LAP
and  FAP in a much more portable fashion (using a set of configuring parameters
to describe the kind of target machine), and it is planned to adopt this as the
Standard LISP LAP/FAP package.

  Current work being carried on at Rand on this project is directed at:

   a. Improved  compiler  macros.  We  have  now  had  several  years   of
      experience  with  the  portable  Lisp compiler, and as a result have
      refined  our  views  about  the  "perfect"  set  of  c-macros.  Some
      improvements  were  already suggested in the compiler paper, and the
      goal here is to implement them.  In  many  cases,  the  improvements
      reduce   the  dependence  on  scratchpad  registers,  and  make  the
      compiling of Lisp to another high level language more efficient.

   b. Fast arithmetic. The aim  here  is  to  support  in-line  arithmetic
      calculations  such  as those offered by Maclisp, but in a completely
      portable manner.  Specific instructions for the  in-line  arithmetic
      will  be  added to the system as a straightforward table, so that it
      is easy to move the model to another computer.

   c. Block compiling. Specific modules will be compiled as "black boxes",
      so that all internal linkages will be set up, and internal names not
      visible to the outside. The will achieve  the  same  effect  as  the
      Maclisp "package" without requiring kernel support for obarrays, for
      example.

   d. Stack  vs  Register studies. The current compiler uses registers for
      general function argument passing. The goal here is  to  modify  the
      compiler so that stack entries can also be used.

   e. Support  for  vector  compilation.  Table  driven  support  for  the
      efficient compilation of vectors will be provided.


3. The Portable Standard LISP System

  The  goal  of the Portable Standard LISP project (PSL) at UTAH, is to produce
an efficient and transportable extended Standard LISP system that may  be  used
to:

   a. produce an efficient and portable Standard LISP kernel;

   b. provide  the  same,  uniform, modern LISP programming environment on
      all of the machines that we  use  (DEC-20,  VAX  and  some  personal
      machine,  perhaps  68000 based), of the power and complexity of UCI-
      LISP or MACLISP, with some extensions and enhancements;

   c. experimentally explore  a  variety  of  LISP  implementation  issues
      (storage management, binding and so on);

   d. effectively  support  the  REDUCE  algebra  system  on  a  number of
      machines.

  The approach we have been using  is  to  write  the  entire  LISP  system  in
Standard  LISP  (with  extensions for dealing with machine words and bytes, and
operations), and to bootstrap it to the desired target machine in two steps:

   a. cross compile an appropriate kernel to the assembly language of  the
      target machine;

   b. once  the  kernel is running, use a resident compiler and loader, or
      fast-loader, to build the rest of the system.

  We currently think of the extensions to Standard LISP as having  two  levels:
the SYSLISP level, dealing with machine words and bytes and machine operations,
enabling  us  to write essentially all of the kernel in Standard LISP; and, the
STDLISP level, incorporating all of the features that make Standard LISP into a
modern LISP.

  In our environment, we write  LISP  code  using  an  ALGOL-like  preprocessor
language,  RLISP,  that  provides  a  number of syntactic niceties that we find
convenient; we do  not  distinguish  LISP  from  RLISP,  and  can  mechanically
translate from one to the other in either direction.
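
  For illustration, here is a small function written in RLISP together with
its mechanical LISP translation (the exact output of the Utah translator may
differ slightly in detail; COUNT is just a made-up example):

     symbolic procedure count u;                  % RLISP form
        if null u then 0 else 1 + count(cdr u);

     (DE COUNT (U)                                % its LISP translation
       (COND ((NULL U) 0)
             (T (PLUS2 1 (COUNT (CDR U))))))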



3.1. The SYSLISP Implementation Language

  SYSLISP  is  a BCPL-like language, couched in LISP form, providing operations
on machine words, machine bytes and LISP items (tagged objects, packed into one
or more words). The control structures and definition language are those of
LISP,  but  the  familiar  PLUS2,  TIMES2,  etc.  are mapped to word operations
WPLUS2, WTIMES2, etc.  SYSLISP handles static allocation of  SYSLISP  variables
and  arrays  and initial LISP symbols, permitting the easy definition of higher
level STDLISP functions, and storage areas. SYSLISP provides convenient compile
time constants for handling strings, LISP symbols, etc.  The  SYSLISP  compiler
is based on the Portable Standard LISP Compiler, with extensions to handle word
level  objects,  and  efficient  (open coded) word level operations. Currently,
SYSLISP  handles  bytes  through  explicit  packing  and  unpacking  operations
GETBYTE(word-address,byte-number)   and  PUTBYTE(word-address,byte-number,byte-
value), without the notion of byte address; it is planned to extend SYSLISP  to
a  C-like  language,  adding  the  appropriate  declarations  and  analysis  of
word/byte/structure operations.
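
  For illustration, a word/byte level routine of the kind SYSLISP is intended
for, written with the primitives named above; WGEQ (a word-level "greater than
or equal" test) is an assumed name, and the routine simply copies the first N
bytes at one word address to another:

     (DE COPYBYTES (SRC DST N)
       (PROG (I)
         (SETQ I 0)
        LOOP
         (COND ((WGEQ I N) (RETURN DST)))    % WGEQ assumed: word-level >=
         (PUTBYTE DST I (GETBYTE SRC I))     % unpack from SRC, pack into DST
         (SETQ I (WPLUS2 I 1))
         (GO LOOP)))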

  STATUS: SYSLISP currently produces FORTRAN and DEC-20 machine  code  for  the
DEC-20;  some exploratory PDP-11 machine code has been produced, in preparation
for PDP-11/45  and  VAX-750  implementations.  Some  other  exploratory  coding
producing PASCAL, and C, has been done.



3.2. The Portable STDLISP

  STDLISP is an extended Standard LISP; in most cases the extensions are in the
form  of  additional  or more powerful functions or features. The Standard LISP
Report, as a specification of an interchange  LISP  subset,  did  not  go  into
implementation   details.   The   STDLISP   manual  specifies  all  (important)
implementation details, and all  source  code  may  be  consulted  for  further
detail. STDLISP currently provides the following facilities on the DEC-20:

   a. Tagged  Data  Types:  [Currently  18  bit  address  field, 9 bit tag
      field],

      ID                  name,value,property-list,function-cell
      INT                 small integer
      FIXNUM              full-word integer
      FLOATING            full-word  float  -  [Not  Yet  Implemented   in
                          current system]
      BIGNUM              arbitrary  precision  integer,  with  BIGNUM and
                          BIGBITS  operations.    [Standard  LISP   source
                          exists,  but  has  not  been  tested  in current
                          STDLISP environment]
      PAIR                "cons" cell
      STRING              vector of characters,  with  indexing  and  sub-
                          string operations
      CODE                machine  code blocks [currently not collected or
                          relocated by garbage collector]
      WORDS               vector of words, not traced by GC
      VECTOR              vector of ITEMs, traced by GC

   b. Compacting Garbage Collector [Currently uses single HEAP, and 9  bit
      GC/RELOC  field; will be replaced by multi-heap and bit-map GC/RELOC
      phase, or copying GC]

   c. Shallow Binding using a Binding Stack for old values.

   d. CATCH and THROW, used to implement ERROR/ERRORSET, and interpreted
      control structures such as PROG, WHILE, LOOP, etc. (a short
      illustrative sketch follows this list).


   e. A  single  LAMBDA  type,  corresponding  to  the Interpreted form of
      Compiled code. APPLY(lambda-or-code,args), passes the args in a  set
      of  registers,  used  by  the STDLISP/SYSLISP compiler. Compiled and
      interpreted code is uniformly  called  through  the  function  cell,
      which  ALWAYS  contains  the  address of executable code, permitting
      fast compiled-to-compiled calls, without the FAST-LINK kludges.

      All   argument   processing   of   the   various   SPREAD/EVAL/Macro
      combinations  (EXPR,  FEXPR,  MACRO, NEXPR), are attributes of an ID
      only, not of code  or  LAMBDA's.  This  permits  uniform,  efficient
      compilation.

   f. Compiler, LAP and FAST LOADER, based on the Portable LISP compiler.

   g. Currently we are implementing a variant of Baker's re-rooting scheme
      to work well in the shallow binding environment; we expect that non-
      funarg  compiled  code  will run essentially as fast as in LISP 1.6.
      Context switches will be more expensive.

   h. We may also implement some form of Stack Group, as done by the  LISP
      machine group, to provide faster large context switch.

   i. We will implement some form of multiple name space, compatible with
      our mode system, and ideas developed by the LISP machine group.
      This is quite important in order to restrict access to the low-level
      system functions, all of which are callable from LISP.
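
  A short illustrative sketch of item d above: using CATCH and THROW to build
a simple ERRORSET-like guard.  MacLisp-style signatures (CATCH tag form) and
(THROW tag value) are assumed here; the actual STDLISP forms may differ, and
GUARDED-EVAL and MY-ERROR are made-up names.

     (DE GUARDED-EVAL (FORM)
       % Return (value) on normal completion; the thrown message otherwise.
       (CATCH '$ERROR$ (LIST (EVAL FORM))))

     (DE MY-ERROR (MSG)
       (THROW '$ERROR$ MSG))          % unwinds back to the enclosing CATCH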

  STATUS:  A  complete  FORTRAN-based  Standard  LISP  interpreter   is   fully
operational  on  the  DEC-20,  compiled  by  the SYSLISP->FORTRAN compiler. The
source consists of 250 lines of DEC-20 macro (actually FAIL) for some I/O, JSYS
and byte primitives, and the rest consists of about 5000 lines of  SYSLISP  and
RLISP  code.    This  interpreter  is  used to debug and refine the sources and
selection of primitives (such as improved DEBUG, TRACE, LAP etc), and  will  be
used as a bootstrap aid.

  A  DEC-20  machine  code  version  has  just  become operational, and will be
polished in the next few days.  (The only change from the FORTRAN version is in
the recoding of the SYSLISP c-macros, the use  of  a  more  efficient  function
linkage,  and  only  68  lines  of handcoded FAIL support). It is significantly
faster and smaller than the FORTRAN version.

  The next steps will be to bring up an  extended  addressing  DEC-20  Standard
LISP,  using  essentially  the  same  c-macros, and some additional kernel code
based on Hedrick's ELISP project at Rutgers.  We then will begin a VAX-750
implementation, and perhaps a 68000 implementation.



3.3. The Compiler and Loader

  STDLISP provides an efficient machine code compiler, small and fast enough to
be  used  as a resident compiler with resident LAP, and certainly as an out-of-
line  compiler. This is exactly the same STDLISP/SYSLISP compiler being used to
compile the kernel to machine code, trivially modified to emit LAP, instead  of
machine  code.    The  compiler  is  an extension of the Portable LISP Compiler
(PLC). The extensions are mainly those to support SYSLISP, and will be  further
improved by the fast-arithmetic optimization work at RAND.

4. Current and Future Tools for Standard LISP and Portable Standard LISP

  Upon the above base, we will provide a number of programming tools that have
been, or will be, developed or adapted from other LISPs (similar to the
adaptation of some InterLISP tools into UCI-LISP).  We plan to follow the UNIX
and  RATFOR  Software  Tools approach as far as possible, by having a number of
small,  self-contained  tools,  each  well  documented,  and  hopefully  usable
independently,  or  in concert, without too much required knowledge about other
tools.

  In the following sub-sections we will briefly describe each of the "tools" or
"tool-packages".



4.1. Language Tools


RLISP parser        RLISP is an extensible ALGOL-like language found to be more
                    convenient to people working in algebraic  language  areas,
                    particularly  computer algebra.  All of our code is written
                    in RLISP.

MODE-REDUCE         MODE-REDUCE is an ALGOL-68- or PASCAL-like interface to
                    Standard LISP, which provides an additional MODE analysis
                    pass after parsing, to rebind "generic" function names to
                    "specific" functions, based on the declared or analysed
                    MODEs of arguments.  The system includes a variety of
                    MODE generators (STRUCT, UNION, etc).

                    We plan to reimplement this system to  use  SYSLISP/STDLISP
                    more effectively. We will also make the MODE-ANALYSIS phase
                    part  of  SYSLISP, so that WORDS, BYTES, ITEMS etc. can co-
                    exist more naturally.

META and MINI       These are two compiler generator systems, accepting a  BNF-
                    like  description  of  the  language, producing a LISP-like
                    parse tree, which is then further translated or transformed
                    using a pattern matcher.  MINI is smaller and  faster  than
                    META but has only a subset of the features.

Pratt Parser        This  is  an extensible, table driven top-down parser using
                    RIGHT-LEFT operator precedences and special  functions  for
                    more complex structures.

RLISP pretty printers
                    RPRINT  and  STRUCT  are  programs to convert Standard LISP
                    back into RLISP; also used to "tidy" older code,  inserting
                    more  structured WHILE, REPEAT, FOREACH type loops in place
                    of PROG/GOTO combinations.

LISP pretty printers
                    We have table driven programs to "grind" LISP code  into  a
                    nicely indented form.



4.2. Algebra and High Precision Arithmetic


REDUCE              REDUCE is a complete computer algebra system that runs on
                    top of Standard LISP.  This system is used stand alone
                    for algebraic manipulation of polynomials, rational
                    functions and general expressions, including derivatives,
                    integration, pattern matching, symbolic matrices, etc.
                    Some projects have used (or could use) REDUCE as part of
                    more complex systems, such as program verification,
                    program transformation, computer aided geometric design,
                    and VLSI.

BIG-FLOAT           This  is  a  general  purpose  arbitrary precision floating
                    point package, built upon the arbitrary  precision  integer
                    package.



4.3. Debugging Tools


DEBUG               DEBUG is a very portable package of functions for
                    tracing, breaking and embedding functions.  Facilities
                    include the (conditional) tracing of function calls and
                    interpreted SETQs; selective backtrace; embedding of
                    functions, to selectively insert pre- and post- actions
                    and conditions; primitive statistics gathering;
                    generation of simple stubs (which print their name and
                    arguments, and read a value to return); and printing of
                    circular and re-entrant lists.



4.4. Editors

  Most of our users have  used  the  existing  system  editor  to  prepare  and
maintain  their code; we have provided STDLISP JSYS/FORK calls on the DEC-20 to
rapidly get in and out of the major editors. We also  have  some  LISP  "incore
editors":

EDIT                A simple line-oriented editor based on SOS/EDIT for editing
                    RLISP/REDUCE   and  some  LISP  input;  mostly  for  people
                    familiar with these editors.

EDITOR              A  simple LISP structure editor based on the InterLISP/UCI-
                    LISP structure editor.

EMID                A  multi-window,  multi-buffer  EMACS-like  screen  editor.
                    This  is  planned  to be the major interface to the STDLISP
                    system, and will have convenient commands (MODES)  to  edit
                    LISP and RLISP, examine LISP documentation and convert LISP
                    and  RLISP  to  and  from other convenient forms. There are
                    "autoparen" modes in  which  an  expression  typed  into  a
                    buffer  automatically  EVALs  as  soon as the expression is
                    complete.  [Currently, EMID runs on  the  DEC-20  LISP  1.6
                    based Standard LISP, and can only drive a Teleray terminal;
                    it   will   be  converted  to  SYSLISP/STDLISP  during  the
                    following months]



4.5. Source Code Control and Documentation Tools


CREF                CREF processes a number of source files, cross-
                    referencing the functions and global variables used; it
                    gives an indication of where each function is defined or
                    redefined, its type (EXPR, FEXPR, etc.), the functions
                    and variables it uses, any undefined functions and
                    variables, and other statistics that can be selected or
                    deselected under flag control.

HELP                HELP displays short text descriptions for major
                    functions on request, by reading a documentation data
                    base; it should also display an activity based HELP-TEXT
                    (e.g. in response to ? at appropriate points).

MANUAL              MANUAL produces a complete reference manual from
                    (selected portions?) of the HELP/MANUAL data base.

  [Both HELP and MANUAL require a considerable amount of work in the conversion
and writing of pieces of text; perhaps these can be generated directly from the
source code by including special comments (%H, %D, etc.); we also need to
coordinate with the SCRIBE sources for the various documents already written.]
-------

∂03-Apr-81  1205	CSVAX.fateman at Berkeley 	Comments on your original call   
Date: 2 Apr 1981 14:53:26-PST
From: CSVAX.fateman at Berkeley
To: engelmore@usc-isi
Subject: Comments on your original call
Redistributed-To: Kahn at ISI, Adams at ISI, 
Redistributed-To: Yonke at BBN, Zdybel at BBN, 
Redistributed-To: Wilson at CCA, 
Redistributed-To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
Redistributed-To: Balzer at ISIB, Crocker at ISIF, 
Redistributed-To: JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-AI, Henry at RAND-AI, 
Redistributed-To: Hedrick at RUTGERS, 
Redistributed-To: Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, 
Redistributed-To: RWW at SU-AI, RPG at SU-AI, 
Redistributed-To: Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, 
Redistributed-To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, 
Redistributed-To: Engelman at USC-ECL
Redistributed-By: ENGELMORE at USC-ISI
Redistributed-Date:  3 Apr 1981

(These notes are probably not fit for distribution, but are indicative
of my understanding of some of the issues, and my view of possible
directions.  Shostak's paper echoes at least some of the same points.)

       Call for a Discussion of Lisp Options

       IPTO recognizes both the critical need of our research community
       for modern computer resources and a responsibility to provide the
       resources necessary to maintain a high quality of research.  This
       message focusses on the AI community, most of which uses Lisp as
       its primary programming language.  Our current effort to meet the
       need for more computing power (both CPU cycles and address space)
       is confounded by the current multitude of options facing us in both
       hardware and software.  Our budget, of course, is finite, and
       necessitates our choosing the best possible investment strategy.
       In order to formulate that strategy and a management plan to
       implement it, we need to discuss the options with you.

       My primary concern here is not hardware, but software.  The
       long-term hardware issues will be dealt with once the software
       question is resolved, but some discussion of hardware is relevant
       (see below).  There are now several respectable Lisp dialects in
       use, and others under development.  The efficiency,
       transportability and programming environment varies significantly
       from one to the other.  Although this pluralism will probably
       continue indefinitely, perhaps we can identify a single "community
       standard" that can be maintained, documented and distributed in a
       professional way, as was done with Interlisp for many years.
This seems worthy of consideration.

       Here are some of the issues that need to be sorted out:

          - Language Development:  There are now a very large set of
            Lisp dialects and sub dialects -- Interlisp, MacLisp,
            CADR-Lisp, Spice-Lisp, Franz-Lisp, NIL, UCI-Lisp,
            "Standard Lisp", MDL, etc.  What are their relative
            merits and significant differences?
Other than cosmetic differences, the differences between these systems
tend to be philosophical and, to a certain point, antithetical.
E.g. to the extent that debugging aids cost in performance, MacLisp
chooses the performance, while Interlisp generally (but not always)
chooses debugging.  "Standard Lisp" is not sufficiently complete for much
exploratory work, because any given implementation must differ from
the "Standard" to be reasonable, and researchers will use those
non-standard extensions.  CADR-Lisp is oriented toward displays,
novelties such as message-passing, micro-compiling, and stand-alone
computing features.  Franz is a low-cost implementation of sufficient
MacLisp to compile Macsyma on a new large-address machine, which also
addresses the UNIX environment more directly (calling of non-Lisp).
Franz, MacLisp, CADR-Lisp, and NIL address the issue of bignumber
arithmetic.  UCI-Lisp appears to be a derivative of an earlier
version of MacLisp/Stanford Lisp, cleaned up, enhanced by some useful program
development tools (perhaps from Interlisp). UCI-Lisp is smaller
than MacLisp, I believe.
NIL is a virtual-machine lisp (or will be...). 
I do not know anything about Spice-Lisp; MDL is (was?) a MacLisp
dialect, unless it has been re-implemented.
	    Is there an
            opportunity to combine any of them as variants of a
            common base, supported by a single implementation?  How
            much compatibility is needed between dialects?
There is a possibility of running a program from one system on another
right now by "conditionalization" at read or compile time.  This
is done now with Macsyma code conditionalized for PDP-10 ITS,
Lisp Machine, DEC-20, Multics, and Franz.  This puts a burden on
the compiler or interpreter.  There is substantial compatibility
between dialects IF the programmer is aware of the desirability
of compatible programs.  The expert X-lisp programmer can undoubtedly
write a program or system which can be run only and exactly on X-lisp.
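
[As a concrete illustration -- a rough sketch only, assuming the #+
feature-test reader syntax of the MacLisp family; the feature names and
system-dependent functions below are invented:

    ;; One source file; the reader keeps only the clause whose feature is
    ;; present in the running Lisp, so the cost is paid at read or compile
    ;; time rather than at run time.
    (defun host-file-date (name)
      #+ITS     (its-file-date name)
      #+Lispm   (lispm-file-date name)
      #+Franz   (franz-file-date name)
      #+Multics (multics-file-date name))

Each implementation sees only its own clause; the others vanish before the
compiler or interpreter ever looks at the form.]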

          - Programming environments: There are two main Lisp
            programming environments:  Interlisp and Maclisp. These
            environments comprise a set of useful functions, a set of
            conventions and a philosophy of programming.  How
            independent are these features from their respective
            language dialects?
There is no important conflict here in the sense that features from
Interlisp tend to migrate to MacLisp, where they are evaluated for
usefulness by the MacLisp community, and either catch on or die.
Example: the implied progn for (lambda () x y) caught on
(a small illustration appears below).
The issues in Sandewall's paper have been debated, and each
set of users left pretty much intact.  On a system with huge
address space, it is likely that advocates of either approach
can be accommodated.
The file-development vs. "environment" development situation
is probably not a Lisp language problem, since the MacLisp
(and Franz, UCI, ...) file-development system includes non-Lisp
editors, etc.
A few features are antithetical though, because they are debug/speed
trade-offs. These could be handled in part by extension of
Interlisp for "block compilation in the MacLisp mode."
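
[The "implied progn" mentioned above is simply that a lambda body may
contain several forms, evaluated in order, with the value of the last one
returned -- e.g.

    ((lambda (x)
       (print x)        ; evaluated for effect
       (* x x))         ; the value of the last form is returned
     3)

prints 3 and returns 9; without the implied progn the two body forms would
need an explicit (progn ...).]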
	    Can both environments be supported
            within a single system?
I believe, yes, on a sufficiently powerful and large address system.
	    Where lies the future with
            respect to networking, or to utilizing the capabilities
            of displays?
These need not be tied to Lisp.  In Franz they are UNIX environment
access issues.  Although access to displays and networking are
important Lisp issues, they are separable, I believe.

          - Portability: Should we be investing more vigorously in
            the development of a highly portable programming language
            (and environment) so we can be less concerned about
            hardware choices?
I thought that the adoption of UNIX was an attempt to separate the
environment from hardware choices (and probably UNIX is the only
real option at this time).  If this is the case, it makes
sense to provide a Lisp which is portable to different hardware by
virtue of the UNIX environment.
            What work needs to be done to minimize
            the effort of transporting Lisp to the many
            microprogrammable personal machines that are appearing
            (or will soon appear) on the market?
Providing a UNIX environment would seem to be the most reasonable approach
for the future, although none of the microprogrammable systems on the
market or on the verge use this, to my knowledge.
(The personal machines which are "microprogrammable":
the Symbolics (and LMI) Lisp Machines, the 3 Rivers PERQ, the Xerox
Dolphin, ... maybe potential small VAX, B1800, ... anything else?)
I do not know that there is really such good evidence
micro-programmability is such a boon to the execution of Lisp. There
is evidence of compact code size, but I know of no study of the
actual effectiveness of CDR-coding and tagged data.  There is some
evidence that a reduced instruction set is appropriate not only for
Lisp, but other higher level languages, and that the speed of the
call/return is most critical.
In fact, another concern might be 
how to transport Lisp to  "microcomputer based" personal machines, such
as Apollo's DOMAIN, the Nu machine, various other Z8000 and MC68000
based systems, and their successors (National, Intel chips), which
have the potential to be fast, extremely cheap, and reliable.  We believe
that transporting C to these machines is inevitable, and that as
a minimum, Franz Lisp can be brought up in very short order.


          - Other issues: Although this meeting is about software,
            there are some machine-specific concerns that we can't
            ignore.  For example, the Vaxen are and will probably
            continue to be a very widely used line of machines.
            What's the future of Lisp for these machines?  More
            specifically, what are the pros and cons of Franz Lisp as
            a near term solution to running Lisp programs on Vaxen?
PRO: There is no other distributed Lisp system on the VAX right now.
It is written in C.  It runs under UNIX (and also under VMS).
It has many MacLisp features (reader syntax, bignums, similar structure
for compilation).
Some Interlisp features are supported (iteration, simple editf).
Source code is available.
It interfaces with Fortran, Pascal, and C (and presumably Ada, when available).
CON:
Franz was written as a system programming language
for the implementation of Macsyma, and could use work, especially
on the user interface. Some of the hairier Maclisp features are
implemented only in the compiler, not the interpreter.
It is not Interlisp.
Error checking could be more elaborate.
The compiler uses (at the moment) a small amount of UNIX-licensed code,
so that VMS-only systems are at a slight disadvantage. 
OTHER:
There is a Pascal virtual machine Interlisp implemented by Havens at
Univ. Wisconsin; there is alleged to be an interpreter of some sort
at Univ. of Mass.  NIL progress? Interlisp progress?
            If the Vax Interlisp and/or NIL effort fail to produce a
            useful product, how big an effort would it be for their
            users to translate their programs to Franz Lisp? 
I believe there are no NIL users now (outside of those using NIL 
to implement NIL) so translating their programs to Franz may
not be a serious consideration.  However, (i) there is a 
NIL simulation written in MacLisp, which presumably
could be moved to Franz.  (ii) Useful aspects
of NIL can be more directly ported to Franz, if appropriate, by
adding to the kernel written in C, or to the Lisp-language
support.

Interlisp users, in my experience, are of two types: those who have a
working system, pretty much, and want to run it (e.g. Boyer-Moore
theorem prover), and those who view Interlisp as an environment
for program and system experimentation.
Running programs can be converted to run under Franz Lisp with
relatively small effort.  The conversion of Interlisp code
to MacLisp has some history of success; there is evidence that
Franz is also a reasonable target.  Perhaps Havens' work would
be useful?
The conversion of the Interlisp environment, complete with
glitches, switches and JSYS's, is a much more difficult task,
and one which, if truly desired, could be done, but
is probably going to conflict in some matters
with advocates of MacLisp or UCI Lisp.  There are ways
of producing alternative systems by conditional compilation
of the kernel, and various redefinitions of the interpreter,
compiler, etc, which can supply substantial levels of compatibility,
but not without cost.  I suspect that the users will have
to be carefully examined to see whether they are really
wedded to Interlisp in its fullness.  A casual question
will receive a casual answer -- "Yes, I want it to be just exactly
like Interlisp" -- which is, I suspect, not fully responsive.
(I do not know how to make them consider other options.)
Presumably Bob Balzer has some sort of mandate here.

I am not familiar enough with the Xerox personal computer
versions of Interlisp (especially the newer ones) to comment on
the usefulness of Interlisp specifically for screen-oriented
transactions.
            How essential is the use of microcode on the Vax for
            efficient Lisp execution?
This one I can answer:  on the VAX 11/780, microcode is definitely
NOT useful.  Unless the hardware itself is altered, the entry/exit
sequence from user microcode is so costly, and the instruction set
so useful as given, that there is no way to win. [Ref. R. Tuck's
M.S. Project at Berkeley, 1979].
On the VAX 11/750, there is a different microcode structure which
is alleged to be better, but even without that, early Franz
benchmarks suggest the 11/750 is as much as 90% of an 11/780 on
non-floating-point Lisp computations.

	    How should the Lisp executions
            change for use on a single user Vax?
Depends on what you mean here. Factoring in paging?
	    What about
            exploiting the the large address space under TOPS-20 as a
            near-term alternative for Interlisp or other Lisp
            dialects?
Near-term implies someone can put together such a system for delivery
soon.  Hedrick's R/UCI lisp is apparently not there yet.  It is also
unclear that it would be cost effective relative to Franz on the VAX.
In fact, Franz could be brought up on a DEC-20.

       I would like to propose a panel of users, implementers and IPTO
       program managers to address these issues with the objective of
       developing a plan for future Lisp development, maintenance and
       support.  The two main items on the agenda are 1) examining the
       alternatives, and 2) formulating a plan of attack.

[Do I know the alternatives?]
Mostly self-serving plans of attack:
(1) Pick a (high power) personal computer and buy gobs. Put them in
local and long-haul networks all over.
(2) Pick a software system that is portable, and put it
on one or more personal computers/mainframes. Plan for future expansion
of capability. (Standard Lisp?)
(3) Pick a software system and port it at whatever cost to everything of
interest.
(4) (Franz Plan?) Distribute the basic Lisp system, denote an Arpanet
site as an integration center; solicit contributions from users,
provide facilities according to specifications from customers
(Lisp or C code); provide tested out versions which can be parameterized
for different systems (e.g. VAXen of different sizes, maclisp-like or
interlisp-like, 68000-based systems [Nu, Apollo]). Perhaps occasionally
consider revision of the base system to incorporate major new
ideas. One of these times could be a revision to Balzer's Interlisp
or NIL, but this would have to be done without disturbing the
users; from past experience we know that there will be offshoots
generated locally; these produce varieties of less-maintainable
sorts, but are probably inevitable, given the hacking that
goes on in AI.
  {Q: how to incorporate/reconcile Dolphins, LM's, Interlisp-10 fanatics,
NIL fans? A: Let them flourish too. Probably a compatibility
package for Interlisp users could be written in Franz
(like the Maclisp one).}
(5) XLisp plan? (same as Franz plan, but you don't have a system now!)

(6) Long term. Develop a more modern lisp based on the capabilities
of time-shared large-memory computers and also  on personal machine
experience from Altos, Dolphins, Dorados, CADRs.  Incorporate, as a
subset or special context environment, sufficient support so that
old Interlisp or MacLisp stuff can be run with small change.
Do we really need an AIDA?

Notes on conversion of Interlisp to Franz:
There are really two levels of conversion:
Level I

(1) Interlisp function "mumble" does not exist in Franz. So write it.

(2) Interlisp construction "mumble" (e.g. selectq) does not exist
(with same semantics) in Franz.  Convert it via macro-expansion or
compilation to a Franz construction.
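
[As a rough sketch of (2), written in a present-day Common Lisp style for
clarity (this is an illustration, not the actual compatibility code),
SELECTQ can be expanded away into forms every dialect already has:

    ;; (selectq expr (key forms...) ((k1 k2) forms...) default-form)
    ;; An atomic key is compared with EQ, a list of keys with MEMBER;
    ;; the last argument is the default expression.
    (defmacro selectq (expr &rest clauses)
      (let ((var (gensym)))
        `(let ((,var ,expr))
           (cond
             ,@(mapcar #'(lambda (clause)
                           (cons (if (atom (car clause))
                                     `(eq ,var ',(car clause))
                                     `(member ,var ',(car clause)))
                                 (cdr clause)))
                       (butlast clauses))
             (t ,(car (last clauses)))))))

Once such a macro (or an equivalent source-to-source rewrite) exists, code
using SELECTQ compiles without further change.]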

Level II

Given a complete ascii file in Interlisp format, convert it to
a complete ascii file, ready for compilation by Maclisp or Franz.
This usually involves setting up dependency information, since the
Interlisp technique of reading everything in all at once and then
compiling selected items is simple but incompatible and less
efficient than the Maclisp approach.
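
[Roughly the kind of textual transformation Level II involves: an Interlisp
symbolic file carries its definitions in forms such as

    (DEFINEQ
      (DOUBLE
        (LAMBDA (X) (IPLUS X X))))

together with file-package commands saying what to compile, while the
corresponding Maclisp or Franz file wants ordinary top-level forms, with any
macros or declarations they depend on appearing earlier in the file:

    (defun double (x) (+ x x))

The mechanical rewriting is the easy part; ordering the output so that it
compiles correctly is where the dependency information comes in.]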

We have considerable experience in both Levels, although there
are still shortfalls in some respects if the user expects to be
able to debug identical code on interlisp systems and Franz
(or maclisp systems)... some of the code will have been transmuted
to other forms at run time.



∂04-Apr-81  2212	JONL at MIT-MC (Jon L White)  
Date:  4 APR 1981 1406-EST
From: JONL at MIT-MC (Jon L White)
To: engelmore at USC-ISI
Redistributed-To: Kahn at ISI, Adams at ISI, 
Redistributed-To: Yonke at BBN, Zdybel at BBN, 
Redistributed-To: Wilson at CCA, 
Redistributed-To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
Redistributed-To: Balzer at ISIB, Crocker at ISIF, 
Redistributed-To: JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-AI, Henry at RAND-AI, 
Redistributed-To: Hedrick at RUTGERS, 
Redistributed-To: Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, 
Redistributed-To: RWW at SU-AI, RPG at SU-AI, 
Redistributed-To: Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, 
Redistributed-To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, 
Redistributed-To: Engelman at USC-ECL
Redistributed-By: ENGELMORE at USC-ISI
Redistributed-Date:  4 Apr 1981

The following two sections are partially drawn from a paper in the 
Proceedings Of The 1979 MACSYMA Users' Conference, "NIL: A Perspective",
by Jon L White,  and from an internal status report prepared Jan 15, 1981. 
Currently associated with the project are Richard L Bryan, Robert W Kerns, 
and Jon L White.

←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←


Brief Overview of the MIT NIL project

	NIL is a "New Implementation of Lisp"; it is a modernization of LISP 
   suitable for implementation on any of the current generation, large-address
   space, standardly-available computers.   The design is intended to be the 
   best machine-independent lisp possible, and thus no facilities are presumed
   which would require special purpose hardware for efficient operation (such 
   as "cdr-coding", or "bit-map raster display").  Rather, the language will 
   expose in the higher level more of the capabilities which are common to 
   nearly all of the likely target machines -- such as typical computer 
   arithmetic, string and character manipulation operations, objects with 
   indexed-access structure in addition to linked-list-access, and so on.

	Major goals are:
   a) To be maximally upwards compatible with MacLISP, and absorb enough
        of the LISPMachine's facilities so as to enable most non-system
	programmers to run their applications in either environment (in 
	particular, the non-I/O oriented parts of programs should be 
	compatible with trivial effort on the programmer's part).
   b) To provide a rich set of primitive data types, which will reflect
        capabilities of typical off-the-shelf computers, and also to
	provide a general user-extensible mechanism for new data types
	(going beyond the capacity of ECL, for example).
   c) To build NIL in NIL, not merely as a curiosity, but to demonstrate the
        feasibility of building an operating system in NIL itself;  and to
	make it easier for the end LISP user to tailor the system to his own
	requirements.  The non-LISP kernel, called the "VM" for Virtual 
	Machine, corresponds roughly to the micro-code complement in a special 
	purpose lisp machine.  We will build "VM"'s on several different 
	machine environments, in order to gain experience in generalizing what 
	are the basic primitives needed.
   d) To build an EMACS in NIL;  To build MACSYMA in NIL.  A very usable EMACS 
	has been built under Multics MacLISP, but suffers from an efficiency 
	problem and a proprietary interest limitation (by Honeywell).   MACSYMA
	has already begun the slow process of abstracting out much of the old
	code, during the re-building of it under Multics MacLISP and the 
	LISPMachine (and significant pieces of it under Franz Lisp).  Thus
	the overall performance of NIL must be high enough so that these
	systems can be competitively supported.	
   e) To provide "object-oriented" programming as an adjunct to standard
	lisp recursion style.  Message-passing semantics has been seen to be
	the "right" way to think about certain problems, but we feel that it
	is not a panacea, and thus its provision in NIL will not ubiquitously 
	displace standard lisp capabilities.
   f) To provide a peak-performance compiler, with modes of "partial
	compilation".  For example, in one mode, the output is a truly
	machine-independent code called LLCODE (for Linearized Lisp CODE), 
	which could be supported by micro-code (but surely won't be, since
	the VM design would be more aggressive when a real micro-code
	option is available).	Generation of actual machine instructions 
	from LLCODE permits a number of compile-time choices (see below
	under "Some compile time options ..."), but in general there will be 
	no need to do "block compilation" in order to achieve efficiency, as 
	most such options can be successfully delayed until runtime.
   g) To cooperate with similar efforts in reducing the low-level differences
	between dialects, so that a nearly-complete implementation of many 
	existing lisp systems can be done in "piggy-back" fashion on top of
	the NIL kernel and support code.
   h) To actualize the implementations on the VAX computer, on the extended-
	addressing PDP-20/80, and on the Nu machine (a personal computer
	project to be undertaken by the Laboratory for Computer Science,
	which was to be based on a MC68000 micro-processor, but which has
	been postponed due to contract failure with the initial hardware 
	supplier).  Where the operating system supports it, the compiler and
        assembler produce sharable, read-only object segments in the file
	system which the loader merely "maps" in and links up; in the VAX 
	implementation, we will have a dynamic loader also for FORTRAN object 
	files, with a relatively efficient interface between LISP and 
	FORTRAN compiled code.
   i) To provide the basis for continued development, as new ideas come forth.
        The M.I.T. research environment is a continuously active one, so a major
	consideration of NIL is to provide a vehicle for experimentation with 
	all the new ideas cropping up in the worlds of AI and of Programming 
	Languages.   In particular, the data type EXTEND, which is essentially 
	a VECTOR (that is, indexed-access block of S-expressions) with a link 
	to a type-descriptor, has been a general enough mechanism for us to 
	experiment recently with several different message-passing protocols,
	since the VM design allows a slot in the type-descriptor for the user
	to provide a "message-sending" interpreter.
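
	[The flavor of the EXTEND mechanism, sketched in a present-day Common
   Lisp style with invented names -- an illustration of the idea only, not
   NIL's actual interface:

        ;; An "extend" is an indexed-access object whose slot 0 links to its
        ;; type-descriptor; the descriptor carries, among other things, a
        ;; slot for a user-supplied message-sending interpreter.
        (defstruct descriptor name handler)

        (defun make-extend (descriptor nslots)
          (let ((obj (make-array (1+ nslots) :initial-element nil)))
            (setf (aref obj 0) descriptor)
            obj))

        ;; SEND reaches the handler through the descriptor and lets it
        ;; interpret the message however that type sees fit.
        (defun send (obj message &rest args)
          (apply (descriptor-handler (aref obj 0)) obj message args))

   Different message-passing protocols then amount to different handler
   functions installed in the descriptor slot.]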

	We have decided upon a typed-pointer scheme rather than segmenting the
   address space into chunks of homogeneous data types; as a consequence,
   memory management calls for a "Stop-and-Copy" garbage collector rather
   than the "Mark-and-Sweep" variety.  Additionally, storage will be divided 
   into a static area and a living area; normally, the GC will only reclaim 
   addresses from the living area, but under explicit user invocation, a 
   "hyper" GC will reclaim also from the static area (such a split has been 
   found to be of paramount importance in reducing the memory management
   overhead in PDP-10 MacLISP.)   We have a very efficient method of
   achieving the kind of limited FUNARGs known as CLOSUREs on the LISPMachine; 
   and we have provided for the future installment of a co-routine facility 
   similar to the "stack-group" of the LISPMachine.  We have rejected the 
   spaghetti-stack model of control in favor of CLOSUREs and co-routines.

	Some compile time options during the generation of actual machine 
   instructions from LLCODE are of interest to the user.  In particular,
   the choice of data access (such as CAR/CDR's, i'th character of string,
   array referencing and so on) is determined by a switch to be one of:
        e1) closed-compiled, with a "mini" subroutine call so that all
            operations may be certified for application to proper data.
            This option will likely offer a speed improvement over
            interpretation of a factor of between 5 and 10.  More
            importantly, it will allow one to experiment with variant
            storage methods; in particular, one can try out various
            "real time" or "incremental" garbage collection methods
            merely by modifying the "mini" subr which implements the
            data access (a sketch of such a subr follows this list).
        e2) open-compiled, with a few instructions for data-type
            certification installed before the actual data access.
	    This option will offer a speed improvement over the closed 
	    option of between 2 and 4 (and thus over interpretation by
	    10 to 40).  Significant pieces of semi-debugged user code
	    may want to be compiled in this way.
	e3) "fast, open-compilation", as currently happens with PDP-10
	    MacLISP;  this "speed first" option makes no guarantees about
	    what happens when an operation is applied to an inappropriate
	    datum.  This will offer a speed improvement over (e2) of
	    possibly a factor of 1.5, and will also produce significantly
	    more compact code.  This will be the option for secure, 
	    debugged code.
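
	[A sketch of what such a "mini" subr amounts to, written here as Lisp
   (present-day Common Lisp style) rather than as the VM's BLISS/assembler,
   with invented names:

        ;; Closed-compiled CAR (option e1): every call goes through this small
        ;; out-of-line routine, which certifies the datum before the access.
        ;; Trying out a variant storage scheme or an incremental-GC read
        ;; barrier is then a matter of changing this one routine.
        (defun mini-car (x)
          (if (consp x)
              (car x)
              (error "CAR applied to an improper datum: ~S" x)))

   Option e2 expands the same test in line at each call site; option e3 emits
   just the raw access, with no check at all.]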

	A message-passing protocol is being built which will be at
   least upwards-compatible with the "Flavor" system of the LISPmachine.
   One feature of importance to NIL users is that the "inheritance
   hierarchy" is flattened at compilation and flavor-composition time,
   so that no time-consuming search through a tree of super-classes need be 
   made at invocation time.  Quite likely, message-passing will be only
   slightly slower than functional invocation, even where there is no
   special hardware support for such operations.
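
	[The effect of that flattening, sketched in a present-day Common Lisp
   style with invented names -- not the actual NIL protocol:

        ;; A component flavor is represented here simply as an alist of
        ;; (message . handler) pairs, components listed most-specific first.
        ;; Composition walks the components once and records, for each
        ;; message, the first handler encountered.
        (defun compose-flavor (components)
          (let ((table (make-hash-table)))
            (dolist (component components table)
              (dolist (entry component)
                (unless (gethash (car entry) table)
                  (setf (gethash (car entry) table) (cdr entry)))))))

        ;; Invocation after composition is a single table lookup -- no search
        ;; up a chain of super-classes at message-send time.
        (defun send-message (table self message &rest args)
          (apply (gethash message table) self args))

   The tree of super-classes is consulted only while composing, never while
   sending.]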



Current Operational status, and future plans:

	Since early 1979, a large body of "compatible" software has been
   built up, in which the same source files serve as both NIL and MacLISP	
   system code.  Portions of this body of code have been compiled and
   run on the LISPmachine and on Multics MacLISP to demonstrate its
   machine-independent nature (the LISPMachine's own body of code development
   supplies many of these utilities, in a "compatible-in-spirit" form, and
   though there seems to be no advantage to sharing code between projects,
   there is still a degree of design cooperation).  By early 1980, a 
   "piggy-backed" version of NIL was running on top of MacLISP.  Using this 
   emulated NIL environment, we developed some VAX-specific tools -- assembler
   and compiler -- and were able in May of 1980 to demonstrate pieces of the 
   whole system running on the VAX.  By late summer of 1980, we had a "toy"
   interpreter running on the VAX, despite lingering bugs in the VM support.
   Following this section is a more extensive progress report which was
   prepared in mid-January 1981.
	A new LISPMachine manual is on the press right now, and a comprehensive
   MacLISP manual will be press ready by June 1981.   These two manuals will
   serve the initial need for a NIL manual, with a modest sized pamphlet
   detailing the differences and limitations of NIL from its "friendly
   cousins".   Also by June 1981, a pilot version of the NIL system for the
   VAX, running under the VMS operating system, will be ready for some 
   non-antagonistic testing/usage by selected sites.  During the summer, we 
   will continue to replace parts of the system which currently are serving 
   as "scaffolding" for our work, but which are not in final form.  Later this 
   year, we will begin a true NIL implementation for the extended-addressing 
   PDP-20/80  (as opposed to the "piggy-backed" version running in the 18-bit 
   address PDP-10).  Plans for bringing NIL up on a micro-processor are 
   somewhat nebulous right now, due to postponement of the Nu project, and to 
   manpower shortages. 
	NIL will be a fairly large system, approximating an operating system 
   in scope, and will itself require a large segment of memory/address-space; 
   but an important feature of the NIL design is the dynamic linking of 
   position-independent code modules from the file system, much the way 
   Multics object segments are linked up, and this will increase the mutual 
   sharability of all code segments.  This will be important when there are
   several different applications, each built on top of a NIL, running on the 
   same VAX (currently, only DEC's VMS operating system permits us the 
   flexibility for the sharing of dynamically-linked code, and we are indeed 
   doing this, but the Berkeley UNIX may soon supply this capability too). 
   Because of the high degree of sharing of the read-only code-segment files,
   we expect to be able to run several (3 or 4?) simultaneous users of large
   applications (such as MACSYMAs, NIL-based text editors, etc) on the
   VAX 11/780;  jobs would be incrementally sharing disk pages even when 
   they were not initiated from the same "dump" file.
   

←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←


Current Progress on the NIL System, and on the VAX Implementation of It
							Jan 15, 1981


    In May of this past year, we demonstrated certain pieces of each 
component of the system as being in operation, but as a whole it 
wasn't debugged enough to run even a toy LISP system;  in mid August, 
we demonstrated a "toy" LISP system, which had some of the rudiments 
necessary to a typical LISP (mini-reader, partial EVALuator, and 
mini-printer).  Even though this system was at a "toy" level, it
did help us find many bugs in each component of the system -- from
the compiler/assembler to the "virtual-machine" code -- and a good
deal of debugging at each level has proceeded since then.
    We are currently still cross-compiling and assembling from the NIL 
emulator systems running on the PDP10, and may be in this mode for another 
four months.  Between mid September and mid November, we re-structured
our PDP10 emulation environment, partly to help extract from it a more 
implementation-independent body of code, and partly to reduce the size 
of our "piggy-backed" system (the NIL emulator runs "on top of" the
PDP10 MacLISP system).  Since NIL intends to incorporate into its
runtime system a large measure of the MIT LISP Machine environment,
this has severely impacted the address space available on the 18-bit
PDP10.  [The MIT LISP Machine will henceforward be referred to as
merely "LISPM"].

On a component-by-component basis, our current status is:

VM ("Virtual Machine")
   Certain facilities are presumed to exist for the runtime environment
   of NIL; with a very flexible micro-code structure, one could put most 
   of these facilities into micro-code, but currently we have no plans to
   put any of them into micro-code on the VAX.  Typical of such items are 
   data-accessing primitives which, at runtime, certify the correctness
   of the data type, primitive dispatch for the interrupt system, primitive
   i/o routines, primitive error service, and a cold-load startup routine 
   which sets up an initial stack and initial dynamic storage areas.
   The Vax NIL also contains a VAX debugger, and some additional LISP-oriented
   debugging routines.  Many of these functions are called "minisubrs",
   due to the fact that when not microcoded, they can be called out-of-line
   using whatever fast, non-recursive subroutine facility is available
   (JSB seems to be the only candidate for the VAX).  Some of the mini-
   subrs are "essential", in that the correct compilation of some piece
   of code will require them;  others are "inessential", in that they
   will be used only when the user requests, by means of a compiler
   switch, full error checking in the runtime code.
	Currently, the VM consists of a motley collection of BLISS-32 code,
   and of a macro-assembler code which must be pre-parsed and converted
   into MACRO (we do this in order to fill in what we perceive to be
   the gaps of VAX MACRO).   Although there are more than 15.K lines of
   this kind of code, one of the members of the group has been working
   on a lisp-coded data base which would permit the mechanical generation
   of most of the minisubrs and of the initial data tables;  this should
   help the conversion of this code to run under UNIX (if necessary).


ASSEMBLER/LOADER
   The NIL-assembler for the VAX, written entirely in the NIL language,
   has been working fairly well since about July 1980.  It is a non-macro,
   symbolic assembler, with a few pseudo-ops, and this is adequate for
   a "lisp" assembler.   Current faults are mainly that no Jump-optimizations 
   are being done -- all forward-branching jumps are "longword", and 
   conditional jumps are done by conditional branches around a jump -- and 
   that some PDP10 dependencies may remain.  The assembler produces a 
   byte-image file, which is then packed into a bit-stream suitable for 
   file-transfer from the PDP10 to the VAX;  one part of this file is like 
   a "program section", in that it represents position-independent, read-only
   instructions; another part is for storing LISP s-expressions, which have 
   to be "relocated" upon loading.  Since instructions do make reference to 
   this "s-expression" data, there is no way to use the linking-loaders of 
   other VAX systems.  
	The loader is written in BLISS-32, and ultimately the intention is 
   for it to page-map in the read-only pages of a file, to gain sharability 
   among multiple users, but currently it is merely inputting these pages.
   Although the i/o usage of the existing loader is VMS dependent, it
   should be possible to transcribe it fairly straightforwardly into
   any other VAX operating system (such as UNIX).  Hopefully, someday,
   we would like to do what the group at Xerox's Palo Alto Research
   Center have done -- write the loader in NIL itself, and bootstrap
   a "cold-load" system by running in a "shadow" mode of the NIL
   emulator (which of course would be running on the PDP10).

COMPILER(s):
   By May 1980, a primitive version of an incremental NIL compiler
   had been coded;  it is incremental in that it first produces a
   generally machine-independent code called LLCODE (acronymic for
   Linearized-Lisp-CODE), which is somewhat akin to the basic output
   of the LISPM compiler.  LLCODE is then converted into LAP
   (the machine-language for the lisp assembler mentioned above)
   for the VAX by an independent module which is much like a second
   compilation phase (hence the appellation "incremental compiler").
   By mid-August, this compiler had been completed for the full
   NIL language, with some work left to do on the optimization of
   registers during the generation-of-LAP phase.  
      One member of the project began, in early April 1980, the conversion of 
   a partial NIL compiler built for the S1;  this compiler was experimental 
   and uses a strategy of compilation outlined in Guy Steele's master's
   thesis;  another member of the project joined this effort in mid May,
   and by mid August, a subset of the NIL language could reliably be
   compiled for the VAX.  The particular optimization strategies of this
   compiler, while interesting from a theoretical point of view, quite
   likely have very little, if any, payoff on the VAX (since it has a fairly
   large memory cache), and under the NIL function-calling sequence (since it
   uses the VAX CALLS instruction).  Consequently, interest in furthering
   this compiler has waned; but out of its "ashes", one group member,
   by mid-December, had distilled a slightly-more-comprehensive but very 
   simple, stack-machine compiler which he is using as a working tool.


EVALUATOR
   The "toy" interpreter mentioned above was unduly restricted due to
   bugs in the "Virtual-Machine", especially the failure of the module
   which does the dynamic function-call transfer (i.e. given the symbolic
   name of a function at runtime, create a subroutine call to it, with
   the arguments put into a stackframe).  As many of these bugs have been
   caught and corrected this fall, a more comprehensive evaluator has been
   tested out now (Jan 1981).   By June 1981, an evaluator of the
   general capability of MacLISP will be available -- but of course all
   written in NIL itself.  Our future plans call for an evaluator which
   has a fast, lexical-variable scheme;  the structure of this idea has
   been worked out by one of us, but it is not of paramount practical
   importance now (it will follow some of the ideas of the SCHEME language
   designed by Gerry Sussman and Guy Steele).

RUN TIME ENVIRONMENT
   The NIL "environment" is much like a subroutine library, along with
   a mixture of interpretive/compiler-oriented aids.  A large part of
   this environment is inspired by the LISPM's environment, and is 
   sufficiently rich that most programs written for the LISPM will run 
   under NIL (major exceptions are, of course, I/O specialties and
   the newer, experimental parts of the CLASS system).  Fortunately,
   this library is about 90% written in a compatible subset of NIL
   so that it forms the basis of the NIL emulator on the PDP10 -- thus
   it has been possible to develop extensively and debug this code
   on the PDP10.  It currently consists of just about 10.K lines of
   rather dense NIL code, which is source-compatible for use either by a 
   native NIL (such as on the VAX), by MacLISP on the PDP10, by the LISPM,
   and (eventually) by MacLISP on Multics.


LISP DEBUGGING AIDS
   Very little has been coded as of Jan 1981.  Plans call for a stack-
   analyzing tool similar to that available in MacLISP and INTERLISP;
   Each compiled function will eventually also contain a "road-map"
   so that variables which are stack-allocated by the compiler may
   be referenced by their original symbolic names.
[note: by March 1981, a fairly impressive amount of one of the MacLISP 
 stack debugging tools had been brought up on the VAX;  nothing has been
 coded for the "road-maps" however.]

NILE
   An EMACS editor, written entirely in NIL, has been developed by
   an MIT undergraduate student, who also did a significant amount of
   work on the Multics EMACS (written in Multics MacLISP).  This
   system has been running, after a fashion, since December 1980, under
   the NIL emulator on the PDP10.  Possibly new problems will arise
   when it is brought up on the VAX, since a major feature of it is
   its interactive use of a CRT screen.


CLASS SYSTEM
   Perhaps the most exciting, innovative idea in programming languages
   now is that of "object-oriented programming";  a first-version of
   this technology was implemented for the piggy-backed NIL in the
   spring of 1980, and major improvements were done during November
   and December 1980.  Support for "message passing", a fundamental
   operation in such systems, has been provided in the VAX VM, and
   in the compiler, but more work is needed to get the most recent
   system to be free of PDP10 dependencies -- hopefully this will be
   done by the end of March 1981.  Our goal is to be at least modestly
   compatible with the kinds of usages available on the LISPM,
   and this will no doubt mean continuing development, since this
   whole area of endeavor is somewhat new.   
	A record-structure facility, called DEFVST, has been completed 
   which depends (partially) on having a CLASS system available -- by
   connecting each structured object into a "class", or "sub-class"
   of objects, there is full extensibility at runtime.  DEFVST is
   entirely written in NIL -- only the low-level "class" support must
   be provided by the compiler and VM.



∂06-Apr-81  1220	YONKE at BBND 	Interlisp-Jericho Status Report    
Date: 5 Apr 1981 1637-EST
Sender: YONKE at BBND
Subject: Interlisp-Jericho Status Report
From: YONKE at BBND
To: Engelmore at USC-ISI
Bcc: Yonke
Message-ID: <[BBND] 5-Apr-81 16:37:22.YONKE>
Redistributed-To: Kahn at ISI, Adams at ISI, Yonke at BBND, Zdybel at BBND, 
Redistributed-To: Wilson at CCA, Fahlman at CMU-10B, 
Redistributed-To: Guy.Steele at CMU-10A, Balzer at ISIB, 
Redistributed-To: Crocker at ISIF, JONL at MIT-MC, Moon at MIT-MC, 
Redistributed-To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
Redistributed-To: Hearn at RAND-AI, Henry at RAND-AI, 
Redistributed-To: Hedrick at RUTGERS, Green at SCI-ICS, 
Redistributed-To: Hendrix at SRI-KL, Shostak at SRI-KL, 
Redistributed-To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
Redistributed-To: Feigenbaum at SU-SCORE, RWW at SU-AI, 
Redistributed-To: RPG at SU-AI, Fateman at BERKELEY, 
Redistributed-To: Griss at UTAH-20, Deutsch at PARC, 
Redistributed-To: Masinter at PARC, Sheil at PARC, 
Redistributed-To: Lee.Moore at CMU-10A, Engelman at USC-ECL
Redistributed-By: YONKE at BBND
Redistributed-Date:  6 Apr 1981

               Status Report on Interlisp-Jericho

1.  Project Description.

The Interlisp-Jericho project at BBN has the same motivations and
goals as Xerox's Interlisp-D project.  Interlisp-Jericho differs
from Interlisp-D only in its underlying implementation -- the
hardware and macro-instruction set.  At the hardware level, our
goal was a powerful personal computer with high-resolution bitmap
display capability.  After a survey of available hardware (in
1979) it was decided the best approach was to build our own
machines -- hence Jericho.  Interlisp-Jericho is an upward
compatible implementation of Interlisp-10.  Like Interlisp-D,
Interlisp-Jericho is designed to raise the communication medium
with the user from a "TTY" (a la Interlisp-10) to sophisticated
IO devices.

2.  Distinguishing Features.

Interlisp-Jericho is implemented entirely in Interlisp and
micro-code (i.e.  there is no "system implementation language").
It has 32 bit typed pointers (6 bits of type -- including user
data types, 2 bits for CAR/CDR coding, and 24 bits of address).
There are three types of numbers:  64 bit integers, 64 bit
floating point, and 24 bit immediate integers (SMALLP).  It is
shallow bound and has a complete implementation of spaghetti
stacks.
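
[For concreteness, here is how such a 32 bit typed pointer decomposes,
sketched with Common Lisp byte operations; the field widths are those given
above, but the bit positions are an assumption:

    ;; Assumed layout, high-order to low-order:
    ;;   6 bits of type | 2 bits of CDR code | 24 bits of address
    (defun pointer-type     (p) (ldb (byte  6 26) p))
    (defun pointer-cdr-code (p) (ldb (byte  2 24) p))
    (defun pointer-address  (p) (ldb (byte 24  0) p))]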

3.  Interlisp-Jericho Status and Hardware.

Interlisp-Jericho currently is running substantially the entire
Interlisp-10 programming environment (minus those features
explicitly stated in the Interlisp-10 manual as dependent on
TENEX/TOPS20, although we have in many cases simulated these).
We are currently involved in three major efforts:  (1) developing
user-interfaces to the advanced IO hardware, (2) eliminating or
mediating discovered incompatibilities with Interlisp-10, and (3)
improving the performance of Interlisp-Jericho now that it is
operational (e.g.  improving the paging algorithm).  We currently
do not have a garbage collector, but plan to implement one in the
near future.

Jericho is a 32 bit micro-programmable computer with a 200MB
local disk and a local network connection.  Physically, it is
roughly the size and shape of a large two-drawer filing cabinet,
except that it is half again as wide.  Jerichos can have from
128K to 4M 32 bit words of physical memory.  Although there are
24 bits allocated for addresses, the current hardware will only
address 22 bits of 32 bit words.  This is a temporary limitation
due to the availability of chips for the pager -- the design
allows the full 24 bit address space and individual machines can
be easily upgraded.  Bitmap memory, which is part of the virtual
address space, can be from 1 to 9 1024x1024 bitmaps.  These can
be split up among black and white displays and/or grouped for
gray-scale or (via a color-lookup table) color displays, as
desired.  Jericho includes four independent micro-process
controllers.  These are dedicated to the control of the local
disk, the network connection, low-speed devices such as the mouse
and keyboard, and the user process.  Besides multiple bitmaps,
Jericho includes audio output that has already been used for
digitized speech and digitally synthesized music.

4.  Further Development.

As stated previously, we are currently "fine-tuning" the system
and developing advanced IO capabilities (estimated completion of
these efforts is summer of 81).  Extensions to the Interlisp
language are anticipated, but not without coordination with other
Interlisp projects such as Interlisp-D and Interlisp-VAX.
Interlisp-10 has been under cooperative development between Xerox
and BBN for many years, and we hope and expect that this
philosophy of cooperation will continue into the future among all
implementors of Interlisp.  Our specific interests for extending
Interlisp include efficient facilities for object-oriented
programming, implementation of "name spaces" or some other
treatment of the name collision problem, and providing more
powerful LAMBDA types.

5.  Comment.

The Interlisp-Jericho project at BBN started much later than the
Interlisp-D project at Xerox.  Our hardware, virtual machine and
compiler differ from theirs, but nevertheless we benefited
considerably from their trail-blazing, and are much indebted for
their cooperation.

∂02-Apr-81  1744	BALZER at USC-ISIB 	INFORMAL INTERLISP MEETING    
Date:  2 Apr 1981 1735-PST
From: BALZER at USC-ISIB
Subject: INFORMAL INTERLISP MEETING
To:   YONKE at BBN, ZDYBEL at BBN, CROCKER at ISIF,
To:   SOWIZREL at RAND-UNIX, GREEN at SCI-ICS, HENDRIX at SRI-KL,
To:   SHOSTAK at SRI-KL, GENESERETH at SU-SCORE,
To:   VANMELLE at SUMEX-AIM, FEIGENBAUM at SU-SCORE,
To:   RWW at SU-AI, RPG at SU-AI, DEUTSCH at PARC,
To:   MASINTER at PARC, SHEIL at PARC, ENGELMAN at USC-ECL,
To:   MARK at ISIF
cc:   BALZER

In preparation for the IPTO Lisp support meeting next week, it seems to me
that it would be most helpful for the INTERLISP subcommunity to get together
to discuss issues particular to INTERLISP and see if a consensus exists. 

Such consensus, or lack thereof, is an important input to the Lisp support 
meeting (the agenda of that meeting precludes this activity from occurring during
the meeting, and the diversity of interests represented would complicate the
discussions).  Therefore, it appears that we should meet beforehand.  Since there
is such limited time, I suggest that the preceding afternoon is most
appropriate.  Gary Hendrix has offered facilities at SRI for this discussion.
He will inform us of time and place in a separate message.

                           PROPOSED AGENDA

I. Assess the strength of the community's commitment to INTERLISP as a dialect
   and as an environment.

   A. If it is strong, discuss INTERLISP based scenarios

      1. How credible and attractive are the existing INTERLISP implementation
         efforts?

         a. XEROX-PARC: D-0

         b. BBN: Jericho

         c. SRI: Foonly

         d. ISI: Vax

      2. Who will be responsible for INTERLISP development (both dialect and
         environment)?

   B. If it is moderate, discuss how compatible another dialect and environment
      would have to be to be attractive. 
      1. How difficult are the technical problems of providing such a level
         of INTERLISP compatibility within some other dialect?

      2. Who would be responsible for creating and maintaining such a 
         compatibility package?

   C. If it is low, discuss which dialect and environment is most attractive

II.Discuss the short term hardware support options

   A. Hardware options

      1. D-0

      2. Jericho

      3. Foonly

      4. Vax

      5. LMI and Symbolics Lisp Machines

      6. Dorado?

   B. Personal machines versus Time Sharing
      Do adequate personal machines exist at a reasonable price, or must we
      rely on time shared machines to lower per user costs?


Suggestions for augmenting and/or changing the agenda are solicited.

This message is being distributed to all INTERLISP users and/or implementors
invited to the Lisp support meeting. Please feel free to pass it on to any
colleagues who might be interested and able to attend.

BOB
-------

∂06-Apr-81  2304	Barstow@SUMEX-AIM 	Future LISP Environments  
Date:  6 Apr 1981 1911-PST
From: Barstow@SUMEX-AIM
Subject: Future LISP Environments
To:   englemore@ISI
cc:   barstow@SUMEX-AIM, gabriel@SU-AI

Bob-

The following is a brief statement about Schlumberger-Doll's
hopes for LISP in the future.  It is being typed on a terminal
with a flakey space bar, so forgive the typos.  Of course,
were we on the net, this would all be much easier...

Schlumberger-Doll Research faces the same dilemma as the rest
of the LISP community.  The number of options for LISP programming
and computing environments is growing; the trade-offs are
difficult to assess, but we cannot wait too long before choosing
the direction(s) on which to concentrate.

From our perspective, it is important to recognize the existence of
two separate environments.  The research environment is primarily
concerned with developing experimental software:  the important
aspects of the environment are flexibility, ease of debugging,
and facilities for producing clear source code.  The application
environment is primarily concerned with developing and running
production software:  the important aspects of the environment
are the efficiency and robustness of the compiled code.
Our major activities are currently in a research environment,
but we expect eventually to be running AI programs at a large
number of application sites (several dozen field interpretation
centers, perhaps a thousand logging trucks).  An important
characteristic of these sites is that there is not
a resident LISP wizard; the programs must be extremely robust even
when run by naive users.  In our case, for the foreseeable future
at least, the application environment will certainly be based on the VAX,
probably running under VMS.  We would obviously prefer it if the research
environment were the same, but there must at least be a way for a program
to make a smooth transition from one to the other (e.g., source code
compatibility).

We do not feel particularly strongly about which of the
alternative LISP environments we eventually use.  We have both
INTERLISP and MACLISP users, generally reflecting their previous
experiences more than major technical issues.  On two specific points,
however, we have strong opinions:  it will be important to handle
numeric data as efficiently as symbolic data;
high quality graphics is extremely important, for both the
research and application environments.

Finally, it is clear that we all would benefit greatly if unity within
the LISP community were achieved; if there are appropriate ways, we
are both willing and eager to help the community achieve that unity.

David R. Barstow
Schlumberger-Doll Research
6.April.81
-------

∂07-Apr-81  0026	ENGELMORE at USC-ISI 	Status reports, etc.   
Date: 7 Apr 1981 0008-PST
Sender: ENGELMORE at USC-ISI
Subject: Status reports, etc.
Subject: [JONL at MIT-MC (Jon L White)]
Subject: [YONKE at BBND: Interlisp-Jericho Status Report]
Subject: [JAMES at USC-ECL: Status Report]
Subject: [Barstow@SUMEX-AIM: Future LISP Environments]
From: ENGELMORE at USC-ISI
To: Kahn, Adams, 
To: Yonke at BBN, Zdybel at BBN, 
To: Wilson at CCA, 
To: Fahlman at CMU-10B, Guy.Steele at CMU-10A, 
To: Balzer at ISIB, Crocker at ISIF, 
To: JONL at MIT-MC, Moon at MIT-MC, 
To: RG at MIT-AI, CLR at MIT-XX, AV at MIT-XX, 
To: Hearn at RAND-AI, Henry at RAND-AI, 
To: Hedrick at RUTGERS, 
To: Green at SCI-ICS, 
To: Hendrix at SRI-KL, Shostak at SRI-KL, 
To: Genesereth at SU-SCORE, VanMelle at SUMEX-AIM, 
To: Feigenbaum at SU-SCORE, 
To: RWW at SU-AI, RPG at SU-AI, 
To: Fateman at BERKELEY, 
To: Griss at UTAH-20, 
To: Deutsch at PARC, Masinter at PARC, Sheil at PARC, 
To: Lee.Moore at CMU-10A, 
To: Engelman at USC-ECL
Message-ID: <[USC-ISI] 7-Apr-81 00:08:05.ENGELMORE>

Here is probably the last installment of status reports before the
meeting.
rse
	
Begin forwarded messages
Mail-From: ARPANET host MIT-MC rcvd at 4-Apr-81 1106-PST
Date:  4 APR 1981 1406-EST
From: JONL at MIT-MC (Jon L White)
To: engelmore at USC-ISI

The following two sections are partially drawn from a paper in the 
Proceedings Of The 1979 MACSYMA Users' Conference, "NIL: A Perspective",
by Jon L White,  and from an internal status report prepared Jan 15, 1981. 
Currently associated with the project are Richard L Bryan, Robert W Kerns, 
and Jon L White.

←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←


Brief Overview of the MIT NIL project

	NIL is a "New Implementation of Lisp"; it is a modernization of LISP 
   suitable for implementation on any of the current generation, large-address
   space, standardly-available computers.   The design is intended to be the 
   best machine-independent lisp possible, and thus no facilities are presumed
   which would require special purpose hardware for efficient operation (such 
   as "cdr-coding", or "bit-map raster display").  Rather, the language will 
   expose in the higher level more of the capabilities which are common to 
   nearly all of the likely target machines -- such as typical computer 
   arithmetic, string and character manipulation operations, objects with 
   indexed-access structure in addition to linked-list-access, and so on.

	Major goals are:
   a) To be maximally upwards compatible with MacLISP, and absorb enough
        of the LISPMachine's facilities so as to enable most non-system
	programmers to run their applications in either environment (in 
	particular, the non-I/O oriented parts of programs should be 
	compatible with trivial effort on the programmer's part).
   b) To provide a rich set of primitive data types, which will reflect
        capabilities of typical off-the-shelf computers, and also to
	provide a general user-extensible mechanism for new data types
	(going beyond the capacity of ECL, for example).
   c) To build NIL in NIL, not merely as a curiosity, but to demonstrate the
        feasibility of building an operating system in NIL itself;  and to
	make it easier for the end LISP user to tailor the system to his own
	requirements.  The non-LISP kernel, called the "VM" for Virtual 
	Machine, corresponds roughly to the micro-code complement in a special 
	purpose lisp machine.  We will build "VM"'s on several different 
	machine environments, in order to gain experience in generalizing what 
	are the basic primitives needed.
   d) To build an EMACS in NIL;  To build MACSYMA in NIL.  A very usable EMACS 
	has been built under Multics MacLISP, but suffers from an efficiency 
	problem and a proprietary interest limitation (by Honeywell).   MACSYMA
	has already begun the slow process of abstracting out much of the old
	code, during the re-building of it under Multics MacLISP and the 
	LISPMachine (and significant pieces of it under Franz Lisp).  Thus
	the overall performance of NIL must be high enough so that these
	systems can be competitively supported.	
   e) To provide "object-oriented" programming as an adjunct to standard
	lisp recursion style.  Message-passing semantics has been seen to be
	the "right" way to think about certain problems, but we feel that it
	is not a panacea, and thus its provision in NIL will not ubiquitously 
	displace standard lisp capabilities.
   f) To provide a peak-performance compiler, with modes of "partial
	compilation".  For example, in one mode, the output is a truly
	machine-independent code called LLCODE (for Linearized Lisp CODE), 
	which could be supported by micro-code (but surely won't be, since
	the VM design would be more aggressive when a real micro-code
	option is available).	Generation of actual machine instructions 
	from LLCODE permits a number of compile-time choices (see below
	under "Some compile time options ..."), but in general there will be 
	no need to do "block compilation" in order to achieve efficiency, as 
	most such options can be successfully delayed until runtime.
   g) To cooperate with similar efforts in reducing the low-level differences
	between dialects, so that a nearly-complete implementation of many 
	existing lisp systems can be done in "piggy-back" fashion on top of
	the NIL kernel and support code.
   h) To actualize the implementations on the VAX computer, on the extended-
	addressing PDP-20/80, and on the Nu machine (a personal computer
	project to be undertaken by the Laboratory for Computer Science,
	which was to be based on a MC68000 micro-processor, but which has
	been postponed due to contract failure with the initial hardware 
	supplier).  Where the operating system supports it, the compiler and
        assembler produce sharable, read-only object segments in the file
	system which the loader merely "maps" in and links up; in the VAX 
	implementation, we will have a dynamic loader also for FORTRAN object 
	files, with a relatively efficient interface between LISP and 
	FORTRAN compiled code.
   i) To provide the basis for continued development, as new ideas come forth.
        The M.I.T. research environment is a continuously active one, so a major
	consideration of NIL is to provide a vehicle for experimentation with 
	all the new ideas cropping up in the worlds of AI and of Programming 
	Languages.   In particular, the data type EXTEND, which is essentially 
	a VECTOR (that is, indexed-access block of S-expressions) with a link 
	to a type-descriptor, has been a general enough mechanism for us to 
	experiment recently with several different message-passing protocols,
	since the VM design allows a slot in the type-descriptor for the user
	to provide a "message-sending" interpreter.
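	As an illustration only -- a schematic sketch in generic Lisp, whose
   primitives and names are invented for exposition and are not the actual
   NIL ones -- an EXTEND can be pictured as a vector whose slot 0 links to
   a type-descriptor, with message dispatch going through an interpreter
   stored in that descriptor:

	(defstruct descriptor name msg-interpreter)

	(defun make-extend (descriptor nslots)
	  ;; slot 0 is reserved for the link to the type-descriptor
	  (let ((obj (make-array (1+ nslots))))
	    (setf (aref obj 0) descriptor)
	    obj))

	(defun send-to-extend (obj message &rest args)
	  ;; dispatch via the user-supplied "message-sending" interpreter
	  (let ((d (aref obj 0)))
	    (apply (descriptor-msg-interpreter d) obj message args)))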

	We have decided upon a typed-pointer scheme rather than segmenting the
   address space into chunks of homogeneous data types; as a consequence,
   memory management calls for a "Stop-and-Copy" garbage collector rather
   than the "Mark-and-Sweep" variety.  Additionally, storage will be divided 
   into a static area and a living area; normally, the GC will only reclaim 
   addresses from the living area, but under explicit user invocation, a 
   "hyper" GC will also reclaim from the static area (such a split has been 
   found to be of paramount importance in reducing the memory-management
   overhead in PDP-10 MacLISP).   We have a very efficient method of 
   achieving the kind of limited FUNARGs known as CLOSUREs on the LISPMachine; 
   and we have provided for the future installation of a co-routine facility 
   similar to the "stack-groups" of the LISPMachine.  We have rejected the 
   spaghetti-stack model of control in favor of CLOSUREs and co-routines.
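	The storage split can be pictured with the following toy sketch
   (generic Lisp, invented names; a real stop-and-copy collector would of
   course trace and copy from the roots rather than filter lists):

	(defvar *living-area* '())     ; objects reclaimed by the normal GC
	(defvar *static-area* '())     ; long-lived objects, normally untouched

	(defun nil-allocate (object &optional staticp)
	  (if staticp
	      (push object *static-area*)
	      (push object *living-area*))
	  object)

	(defun reachable-p (object)
	  (declare (ignore object))
	  t)                           ; placeholder for tracing from the roots

	(defun gc (&optional hyperp)
	  (setq *living-area* (remove-if-not #'reachable-p *living-area*))
	  (when hyperp                 ; "hyper" GC: explicit user request only
	    (setq *static-area* (remove-if-not #'reachable-p *static-area*))))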

	Some compile time options during the generation of actual machine 
   instructions from LLCODE are of interest to the user.  In particular,
   the choice of data access (such as CAR/CDRs, the i'th character of a string,
   array referencing, and so on) is determined by a switch to be one of the
   following (a sketch of the idea appears after this list):
        e1) closed-compiled, with a "mini" subroutine call so that all
	    operations may be certified for application to proper data.
	    This option will likely offer a speed improvement over
	    interpretation of a factor of between 5 and 10.  More importantly,
	    it will allow one to experiment with variant storage methods;
	    in particular, one can try out various "real time" or
	    "incremental" garbage-collection methods merely by modifying
	    the "mini" subr which implements the data access.
	e2) open-compiled, with a few instructions for data-type 
	    certification installed before the actual data access.
	    This option will offer a speed improvement over the closed 
	    option of a factor of between 2 and 4 (and thus over interpretation
	    by 10 to 40).  Significant pieces of semi-debugged user code
	    may well want to be compiled in this way.
	e3) "fast, open-compilation", as currently happens with PDP-10
	    MacLISP;  this "speed first" option makes no guarantees about
	    what happens when an operation is applied to an inappropriate
	    datum.  This will offer a speed improvement over (e2) of
	    possibly a factor of 1.5, and will also produce significantly
	    more compact code.  This will be the option for secure, 
	    debugged code.
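	To make the three modes concrete, here is a hedged sketch in generic
   Lisp (the switch *ACCESS-MODE* and the "mini" subr CHECKED-CAR are
   invented names, not the actual compiler interface):

	(defvar *access-mode* :closed)  ; one of :closed, :checked-open, :fast-open

	(defun checked-car (x)          ; the out-of-line "mini" subr of (e1)
	  (if (consp x)
	      (car x)
	      (error "CAR applied to an inappropriate datum: ~S" x)))

	(defmacro access-car (x)
	  (ecase *access-mode*
	    (:closed       `(checked-car ,x))          ; e1: call the mini-subr
	    (:checked-open `(let ((y ,x))              ; e2: in-line certification
	                      (if (consp y)
	                          (car y)
	                          (error "CAR of non-list: ~S" y))))
	    (:fast-open    `(car ,x))))                ; e3: speed first, no checks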

	A message-passing protocol is being built which will be at
   least upwards-compatible with the "Flavor" system of the LISPmachine.
   One feature of importance to the NIL user is that the "inheritance
   hierarchy" is flattened at compilation and flavor-composition time,
   so that no time-consuming search through a tree of super-classes need be 
   made at invocation time.  Quite likely, message-passing will be only
   slightly slower than ordinary function invocation, even where there is no
   special hardware support for such operations.
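	The flattening can be sketched as follows (generic Lisp, invented
   names; not the actual Flavor-system code): at composition time the
   methods of a flavor and of all its super-flavors are merged into one
   flat table, so that a message send is a single lookup.

	(defstruct flavor name supers methods)  ; methods: alist of (message . function)

	(defun compose-flavor (flavor)
	  ;; walk the supers first, so that nearer flavors override farther ones
	  (let ((table (make-hash-table)))
	    (labels ((walk (f)
	               (dolist (super (flavor-supers f)) (walk super))
	               (dolist (m (flavor-methods f))
	                 (setf (gethash (car m) table) (cdr m)))))
	      (walk flavor))
	    table))

	(defun send-message (self table message &rest args)
	  (apply (gethash message table) self args))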



Current Operational Status, and Future Plans:

	Since early 1979, a large body of "compatible" software has been
   built up, in which the same source files serve as both NIL and MacLISP	
   system code.  Portions of this body of code have been compiled and
   run on the LISPmachine and on Multics MacLISP to demonstrate its
   machine-independent nature (the LISPMachine's own body of code development
   supplies many of these utilities, in a "compatible-in-spirit" form, and
   though there seems to be no advantage to sharing code between projects,
   there is still a degree of design cooperation).  By early 1980, a 
   "piggy-backed" version of NIL was running on top of MacLISP.  Using this 
   emulated NIL environment, we developed some VAX-specific tools -- assembler
   and compiler -- and were able in May of 1980 to demonstrate pieces of the 
   whole system running on the VAX.  By late summer of 1980, we had a "toy"
   interpreter running on the VAX, despite lingering bugs in the VM support.
   Following this section is a more extensive progress report which was
   prepared in mid-January 1981.
	A new LISPMachine manual is in press right now, and a comprehensive
   MacLISP manual will be press-ready by June 1981.   These two manuals will
   serve the initial need for a NIL manual, together with a modest-sized
   pamphlet detailing the differences and limitations of NIL relative to its
   "friendly cousins".   Also by June 1981, a pilot version of the NIL system
   for the VAX, running under the VMS operating system, will be ready for some 
   non-antagonistic testing/usage by selected sites.  During the summer, we 
   will continue to replace parts of the system which currently are serving 
   as "scaffolding" for our work, but which are not in final form.  Later this 
   year, we will begin a true NIL implementation for the extended-addressing 
   PDP-20/80 (as opposed to the "piggy-backed" version running in the 18-bit 
   address space of the PDP-10).  Plans for bringing NIL up on a micro-processor 
   are somewhat nebulous right now, due to the postponement of the Nu project 
   and to manpower shortages. 
	NIL will be a fairly large system, approximating an operating system 
   in scope, and will itself require a large segment of memory/address-space; 
   but an important feature of the NIL design is the dynamic linking of 
   position-independent code modules from the file system, much the way 
   Multics object segments are linked up, and this will increase the mutual 
   sharability of all code segments.  This will be important when there are 
   several different applications, each built on top of a NIL, running on the 
   same VAX (currently, only DEC's VMS operating system permits us the 
   flexibility for the sharing of dynamically-linked code, and we are indeed 
   doing this, but Berkeley UNIX may soon supply this capability too). 
   Because of the high degree of sharing of the read-only code-segment files,
   we expect to be able to run several (3 or 4?) simultaneous users of large
   applications (such as MACSYMAs, NIL-based text editors, etc.) on the 
   VAX 11/780;  jobs would be incrementally sharing disk pages even when 
   they were not initiated from the same "dump" file.
   

←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←←


Current Progress on the NIL System, and on the VAX Implementation of It
							Jan 15, 1981


    In May of this past year, we demonstrated certain pieces of each 
component of the system in operation, but as a whole it 
wasn't debugged enough to run even a toy LISP system;  in mid-August, 
we demonstrated a "toy" LISP system, which had some of the rudiments 
necessary for a typical LISP (a mini-reader, a partial EVALuator, and a 
mini-printer).  Even though this system was at a "toy" level, it
did help us find many bugs in each component of the system -- from
the compiler/assembler to the "virtual-machine" code -- and a good
deal of debugging at each level has proceeded since then.
    We are currently still cross-compiling and assembling from the NIL 
emulator systems running on the PDP10, and may be in this mode for another 
four months.  Between mid September and mid November, we re-structured
our PDP10 emulation environment, partly to help extract from it a more 
implementation-independent body of code, and partly to reduce the size 
of our "piggy-backed" system (the NIL emulator runs "on top of" the
PDP10 MacLISP system).  Since NIL intends to incorporate into its
runtime system a large measure of the MIT LISP Machine environment,
this has severely impacted the address space available on the 18-bit
PDP10.  [The MIT LISP Machine will henceforward be referred to as
merely "LISPM"].

On a component-by-component basis, our current status is:

VM ("Virtual Machine")
   Certain facilities are presumed to exist for the runtime environment
   of NIL; with a very flexible micro-code structure, one could put most 
   of these facilities into micro-code, but currently we have no plans to
   put any of them into micro-code on the VAX.  Typical of such items are 
   data-accessing primitives which, at runtime, certify the correctness
   of the data type, primitive dispatch for the interrupt system, primitive
   i/o routines, primitive error service, and a cold-load startup routine 
   which sets up an initial stack and initial dynamic storage areas.
   The VAX NIL also contains a VAX debugger, and some additional LISP-oriented
   debugging routines.  Many of these functions are called "minisubrs",
   because, when not microcoded, they can be called out-of-line
   using whatever fast, non-recursive subroutine facility is available
   (JSB seems to be the only candidate for the VAX).  Some of the mini-
   subrs are "essential", in that the correct compilation of some piece
   of code will require them;  others are "inessential", in that they
   will be used only when the user requests, by means of a compiler
   switch, full error checking in the runtime code.
	Currently, the VM consists of a motley collection of BLISS-32 code,
   and of macro-assembler code which must be pre-parsed and converted
   into MACRO (we do this in order to fill in what we perceive to be
   the gaps of VAX MACRO).   Although there are more than 15.K lines of
   this kind of code, one of the members of the group has been working
   on a lisp-coded data base which would permit the mechanical generation
   of most of the minisubrs and of the initial data tables;  this should
   help the conversion of this code to run under UNIX (if necessary).


ASSEMBLER/LOADER
   The NIL-assembler for the VAX, written entirely in the NIL language,
   has been working fairly well since about July 1980.  It is a non-macro, 
   symbolic assembler, with a few pseudo-ops, and this is adequate for
   a "lisp" assembler.   Current faults are mainly that no Jump-optimizations 
   are being done -- all forward-branching jumps are "longword", and 
   conditional jumps are done by conditional branches around a jump -- and 
   that some PDP10 dependencies may remain.  The assembler produces a 
   byte-image file, which is then packed into a bit-stream suitable for 
   file-transfer from the PDP10 to the VAX;  one part of this file is like 
   a "program section", in that it represents position-independent, read-only
   instructions; another part is for storing LISP s-expressions, which have 
   to be "relocated" upon loading.  Since instructions do make reference to 
   this "s-expression" data, there is no way to use the linking-loaders of 
   other VAX systems.  
	The loader is written in BLISS-32, and ultimately the intention is 
   for it to page-map in the read-only pages of a file, to gain sharability 
   among multiple users, but currently it is merely inputting these pages.
   Although the i/o usage of the existing loader is VMS dependent, it
   should be possible to transcribe it fairly straightforwardly into
   any other VAX operating system (such as UNIX).  Someday, we would
   like to do what the group at Xerox's Palo Alto Research
   Center has done -- write the loader in NIL itself, and bootstrap
   a "cold-load" system by running in a "shadow" mode of the NIL
   emulator (which of course would be running on the PDP10).

COMPILER(s):
   By May 1980, a primitive version of an incremental NIL compiler
   had been coded;  it is incremental in that it first produces a
   generally machine-independent code called LLCODE (acronymic for
   Linearized-Lisp-CODE), which is somewhat akin to the basic output
   of the LISPM compiler.  LLCODE is then converted into LAP
   (the machine language for the lisp assembler mentioned above)
   for the VAX by an independent module which is much like a second
   compilation phase (hence the appellation "incremental compiler").
   By mid-August, this compiler had been completed for the full
   NIL language, with some work left to do on the optimization of
   registers during the generation-of-LAP phase.  
      One member of the project began, in early April 1980, the conversion of 
   a partial NIL compiler built for the S1;  this compiler was experimental 
   and uses a strategy of compilation outlined in Guy Steele's master's
   thesis;  another member of the project joined this effort in mid-May,
   and by mid-August, a subset of the NIL language could reliably be
   compiled for the VAX.  The particular optimization strategies of this
   compiler, while interesting from a theoretical point of view, quite 
   likely have very little, if any, payoff on the VAX (since it has a fairly 
   large memory cache) and under the NIL function-calling sequence (since it 
   uses the VAX CALLS instruction).  Consequently, interest in furthering 
   this compiler has waned; but out of its "ashes", one group member had,
   by mid-December, distilled a slightly-more-comprehensive but very 
   simple stack-machine compiler which he is using as a working tool.


EVALUATOR
   The "toy" interpreter mentioned above was unduly restricted due to
   bug in the "Virtual-Machine", especially the failure of the module
   which does the dynamic function-call transfer (i.e. given the symbolic
   name of a function at runtime, create a subroutine call to it, with
   the arguments put into a stackframe).  As many of these bugs have been
   caught and corrected this fall, a more comprehensive evaluator has been
   tested out now (Jan 1981).   By June 1981, an evaluator of the
   general capability of MacLISP will be available -- but of course all
   written in NIL itself.  Our future plans call for an evaluator which
   has a fast, lexical-variable scheme;  the structure of this idea has
   been worked out by one of us, but it is not of paramount practical
   importance now (it will follow some of the ideas of the SCHEME language
   designed by Gerry Sussman and Guy Steele).
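   The flavor of the lexical-variable scheme can be suggested by this
   minimal interpreter sketch (generic Lisp, invented names, and far
   simpler than what is planned): each LAMBDA closes over the environment
   in which it was evaluated, so variable lookup is purely lexical.

	(defun tiny-eval (form env)
	  (cond ((symbolp form) (cdr (assoc form env)))      ; lexical lookup
	        ((atom form) form)                           ; self-evaluating
	        ((eq (car form) 'lambda)                     ; (lambda (vars) body)
	         (list 'closure (cadr form) (caddr form) env))
	        (t (tiny-apply (tiny-eval (car form) env)
	                       (mapcar #'(lambda (a) (tiny-eval a env))
	                               (cdr form))))))

	(defun tiny-apply (closure args)
	  (destructuring-bind (tag vars body env) closure
	    (declare (ignore tag))
	    (tiny-eval body (pairlis vars args env))))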

RUN TIME ENVIRONMENT
   The NIL "environment" is much like a subroutine library, along with
   a mixture of interpretive/compiler-oriented aids.  A large part of
   this environment is inspired by the LISPM's environment, and is 
   sufficiently rich that most programs written for the LISPM will run 
   under NIL (major exceptions are, of course, I/O specialties
   the the newer, experimental parts of the CLASS system).  Fortunately,
   this library is about 90% written in a compatible subset of NIL
   so that it forms the basis of the NIL emulator on the PDP10 -- thus
   it has been possible to develop extensively and debug this code
   on the PDP10.  It currently consists of just about 10.K lines of
   rather dense NIL code, which is source-compatible for use either by a 
   native NIL (such as on the VAX), by MacLISP on the PDP10, by the LISPM,
   and (eventually) by MacLISP on Multics.


LISP DEBUGGING AIDS
   Very little has been coded as of Jan 1981.  Plans call for a stack-
   analyzing tool similar to that available in MacLISP and INTERLISP;
   each compiled function will eventually also contain a "road-map"
   so that variables which are stack-allocated by the compiler may
   be referenced by their original symbolic names.
[note: by March 1981, a fairly impressive amount of one of the MacLISP 
 stack debugging tools had been brought up on the VAX;  nothing has been
 coded for the "road-maps" however.]

NILE
   An EMACS editor, written entirely in NIL, has been developed by
   an MIT undergraduate student, who also did a significant amount of
   work on the Multics EMACS (written in Multics MacLISP).  This
   system has been running, after a fashion, since December 1980, under
   the NIL emulator on the PDP10.  Possibly new problems will arise
   when it is brought up on the VAX, since a major feature of it is
   its interactive use of a CRT screen.


CLASS SYSTEM
   Perhaps the most exciting, innovative idea in programming languages
   now is that of "object-oriented programming";  a first version of
   this technology was implemented for the piggy-backed NIL in the
   spring of 1980, and major improvements were made during November
   and December 1980.  Support for "message passing", a fundamental
   operation in such systems, has been provided in the VAX VM, and
   in the compiler, but more work is needed to get the most recent
   system to be free of PDP10 dependencies -- hopefully this will be
   done by the end of March 1981.  Our goal is to be at least modestly
   compatible with the kinds of usages available on the LISPM,
   and this will no doubt mean continuing development, since this
   whole area of endeavor is somewhat new.   
	A record-structure facility, called DEFVST, has been completed;
   it depends (partially) on having a CLASS system available -- by
   connecting each structured object to a "class", or "sub-class",
   of objects, there is full extensibility at runtime (a sketch of the
   general idea follows).  DEFVST is entirely written in NIL -- only the
   low-level "class" support must be provided by the compiler and VM.
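	The general idea (not the actual DEFVST syntax) can be sketched in
   generic Lisp, with invented names: each structured object carries a
   link to its class object, so new record types can be created at
   runtime without any compiler support.

	(defstruct record-class name slot-names)

	(defun define-record-class (name slot-names)
	  (make-record-class :name name :slot-names slot-names))

	(defun make-record (class &rest initial-values)
	  ;; e.g. (make-record (define-record-class 'ship '(x y mass)) 0 0 100)
	  (let ((obj (make-array (1+ (length (record-class-slot-names class))))))
	    (setf (aref obj 0) class)        ; slot 0 links the object to its class
	    (loop for v in initial-values
	          for i from 1
	          do (setf (aref obj i) v))
	    obj))

	(defun record-ref (obj slot-name)
	  (let* ((class (aref obj 0))
	         (pos (position slot-name (record-class-slot-names class))))
	    (aref obj (1+ pos))))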




          --------------------
Mail-From: ARPANET host BBND rcvd at 5-Apr-81 1338-PST
Date: 5 Apr 1981 1637-EST
From: YONKE at BBND
To: Engelmore at USC-ISI
Subject: Interlisp-Jericho Status Report
Message-ID: <[BBND] 5-Apr-81 16:37:22.YONKE>
Sender: YONKE at BBND

               Status Report on Interlisp-Jericho

1.  Project Description.

The Interlisp-Jericho project at BBN has the same motivations and
goals as Xerox's Interlisp-D project.  Interlisp-Jericho differs
from Interlisp-D only in its underlying implementation -- the
hardware and macro-instruction set.  At the hardware level, our
goal was a powerful personal computer with high-resolution bitmap
display capability.  After a survey of available hardware (in
1979), it was decided that the best approach was to build our own
machines -- hence Jericho.  Interlisp-Jericho is an upward
compatible implementation of Interlisp-10.  Like
Interlisp-D, Interlisp-Jericho is designed to raise the
medium of communication with the user from a "TTY" (a la
Interlisp-10) to sophisticated IO devices.

2.  Distinguishing Features.

Interlisp-Jericho is implemented entirely in Interlisp and
micro-code (i.e.  there is no "system implementation language").
It has 32 bit typed pointers (6 bits of type -- including user
data types, 2 bits for CAR/CDR coding, and 24 bits of address).
There are three types of numbers:  64 bit integers, 64 bit
floating point, and 24 bit immediate integers (SMALLP).  It is
shallow bound and has a complete implementation of spaghetti
stacks.
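As an illustration of such a typed-pointer layout (the exact field
positions below are an assumption made for this sketch, not a statement
of the Jericho hardware format), the fields of a 32 bit pointer word
could be extracted as follows, in generic Lisp:

	(defun pointer-type (word)
	  (ldb (byte 6 26) word))        ; 6 bits of data type

	(defun pointer-cdr-code (word)
	  (ldb (byte 2 24) word))        ; 2 bits of CAR/CDR coding

	(defun pointer-address (word)
	  (ldb (byte 24 0) word))        ; 24 bits of address

	(defun make-pointer (type cdr-code address)
	  (dpb type (byte 6 26)
	       (dpb cdr-code (byte 2 24)
	            (ldb (byte 24 0) address))))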

3.  Interlisp-Jericho Status and Hardware.

Interlisp-Jericho currently is running substantially the entire
Interlisp-10 programming environment (minus those features
explicitly stated in the Interlisp-10 manual as dependent on
TENEX/TOPS20, although we have in many cases simulated these).
We are currently involved in three major efforts:  (1) developing
user-interfaces to the advanced IO hardware, (2) eliminating or
mediating discovered incompatibilities with Interlisp-10, and (3)
improving the performance of Interlisp-Jericho now that it is
operational (e.g.  improving the paging algorithm).  We currently
do not have a garbage collector, but plan to implement one in the
near future.

Jericho is a 32 bit micro-programmable computer with a 200MB
local disk and a local network connection.  Physically, it is
roughly the size and shape of a large two-drawer filing cabinet,
except that it is half again as wide.  Jerichos can have from
128K to 4M 32 bit words of physical memory.  Although there are
24 bits allocated for addresses, the current hardware will only
address 22 bits of 32 bit words.  This is a temporary limitation
due to the availability of chips for the pager -- the design
allows the full 24 bit address space and individual machines can
be easily upgraded.  Bitmap memory, which is part of the virtual
address space, can be from 1 to 9 1024x1024 bitmaps.  These can
be split up among black and white displays and/or grouped for
gray-scale or (via a color-lookup table) color displays, as
desired.  Jericho includes four independent micro-process
controllers.  These are dedicated to the control of the local
disk, the network connection, low-speed devices such as the mouse
and keyboard, and the user process.  Besides multiple bitmaps,
Jericho includes audio output that has already been used for
digitized speech and digitally synthesized music.

4.  Further Development.

As stated previously, we are currently "fine-tuning" the system
and developing advanced IO capabilities (estimated completion of
these efforts is summer of 81).  Extensions to the Interlisp
language are anticipated, but not without coordination with other
Interlisp projects such as Interlisp-D and Interlisp-VAX.
Interlisp-10 has been under cooperative development between Xerox
and BBN for many years, and we hope and expect that this
philosophy of cooperation will continue into the future among all
implementors of Interlisp.  Our specific interests for extending
Interlisp include efficient facilities for object-oriented
programming, implementation of "name spaces" or some other
treatment of the name collision problem, and providing more
powerful LAMBDA types.

5.  Comment.

The Interlisp-Jericho project at BBN started much later than the
Interlisp-D project at Xerox.  Our hardware, virtual machine and
compiler differ from theirs, but nevertheless we benefited
considerably from their trail-blazing, and are much indebted for
their cooperation.

          --------------------
Mail-From: ARPANET host USC-ECL rcvd at 6-Apr-81 1216-PST
Date: 6 APR 1981 1215-PST
From: JAMES at USC-ECL
To: Engelmore at ISI
Subject: Status Report
Message-ID: <[USC-ECL] 6-APR-81 12:15:21.JAMES>
Sender: JAMES at USC-ECL

	Systems Cognition Corp. (SCC) Status Report on SCC LISP



1- Describe your project

        The  intent  of  our  project  is to provide a version of
Interlisp called SCC LISP that runs on the LMI LISP Machine.  SCC
LISP  will  be  an  augmented  version  of  the  main features of
Interlisp-10;  it  will  be  layered  on  top  of  the   existing
environment  of  the  LISP  Machine in order to leave the present
LISP Machine environment undisturbed.  By adding the new  version
of  Interlisp  in  this  manner,  we allow the user to program in
either SCC LISP or in LISP Machine LISP or in  many  cases  in  a
combination of the two dialects.

        The principal departure of SCC LISP from Interlisp is in
the way it handles the stack.  SCC  LISP  will  not  support  the
spaghetti stack and its general stack accessing functions will be
incompatible with those of Interlisp.  SCC  LISP  will,  however,
provide  a  capability for accessing the stack that is equivalent
in power to that of Interlisp (exclusive of the spaghetti stack).


2- What are the distinguishing features of your  language  and/or
programming environment?

        a   -  All  of  the  already  existing  hardware/software
features of the MIT LISP Machine (please see the MIT status
report) will be preserved.

        b  -  Most  of  the  features of Interlisp including such
facilities as: CLISP, DWIM, Programmer's Assistant,  Masterscope,
Record Package, structure editor, etc.  will be provided.

            We  plan  to  implement  these  facilities  using the
existing code of Interlisp-10.

        c  -  The  various  packages  of  Interlisp  (E.g.,   the
structure editor, Programmer's Assistant) will be extended to
make use of  the  additional  facilities  provided  by  the  LISP
Machine, e.g.  its Window System.

        d  -  Because  SCC  LISP  is  just  layered upon the LISP
Machine software, intermixing of SCC LISP with LISP Machine  LISP
will be possible in most instances.


3- Is your system operational?  If yes, on what hardware?  If no,
when do you expect to be operational, and on what?

        Construction of SCC LISP has begun recently.  The portion
built  so  far  runs  on  the  LMI LISP Machine in Cambridge.  It
includes the following components:

        a - A  fairly  extensive  translator  system  written  in
Interlisp that is being used to translate various portions of the
Interlisp-10 code to run on the LISP Machine.

        b -  Interlisp's  string  handling  functions  have  been
implemented.

        c - Interlisp's arithmetic functions have been designed
and are awaiting debugging.

        d - Interlisp's array  functions  have  been  implemented
with the exception of swap arrays.

        e - The LISP Machine's EVAL routine has been extended to
support Interlisp's CLISP array feature.

        f - EVAL, APPLY, APPLY*, COND, PROG, PROGN, SETQ, et
al. have been extended to support many of the Interlisp-like
types of eval-blips.  Our intention is for the eval-blips of
SCC LISP to be as close to Interlisp's as possible.  But we
expect that some differences will be unavoidable.  Nevertheless,
we  will  be  able  to  provide  the  user  with equally powerful
features.

        g - The LISP Machine's error handler has been extended to
include  a new mode of error handling.  This new mode is designed
to encapsulate run-time errors in an environment that is suitable
for  Interlisp's  DWIM to operate in.  Primarily, these additions
have been in providing the LISP Machine with a  FAULTEVAL  and  a
FAULTAPPLY.

        h  - Interlisp's structure editor has been implemented on
the LISP Machine by the process mentioned in item (a), above.
The  editor  has been extended to be display oriented and to make
use of the mouse for selection of  lists,  atoms,  and  tails  of
expressions displayed on the screen.

        i - A local, TENEX-like file system, implemented on top
of an already existing MIT file system, is under construction.


4- What are your present plans for further development?   Include
estimated milestone dates, if possible:

        The SCC LISP Project is presently in the planning stage;
the feasibility of the project and the merits of the design are
being explored.  Present plans call for the project to be
completed within one year of its formal inception.  A
demonstrable version of the entire system would be available
approximately two-thirds of the way through the project.
Initiation of the project, in part, awaits further consideration
of the merits of the project by the Interlisp community of users.

        The  project  is  organized  to  provide  a  sequence  of
versions   of  SCC  LISP  each  of  which  will  be  increasingly
compatible with existing Interlisp programs.  Therefore, progress
of  the work can be readily measured throughout the course of the
project.

          --------------------
Mail-From: ARPANET host SUMEX-AIM rcvd at 6-Apr-81 2304-PST
Date:  6 Apr 1981 1911-PST
From: Barstow@SUMEX-AIM
To:   englemore@ISI
Cc:   barstow@SUMEX-AIM, gabriel@SU-AI
Subject: Future LISP Environments

Bob-

The following is a brief statement about Schlumberger-Doll's
hopes for LISP in the future.  It is being typed on a terminal
with a flakey space bar, so forgive the typos.  Of course,
were we on the net, this would all be much easier...

Schlumberger-Doll Research faces the same dilemma as the rest
of the LISP community.  The number of options for LISP programming
and computing environments is growing; the trade-offs are
difficult to assess, but we cannot wait too long before choosing
the direction(s) on which to concentrate.

From our perspective, it is important to recognize the existence of
two separate environments.  The research environment is primarily
concerned with developing experimental software:  the important
aspects of the environment are flexibility, ease of debugging,
and facilities for producing clear source code.  The application
environment is primarily concerned with developing and running
production software:  the important aspects of the environment
are the efficiency and robustness of the compiled code.
Our major activities are currently in a research environment,
but we expect eventually to be running AI programs at a large
number of application sites (several dozen field interpretation
centers, perhaps a thousand logging trucks).  An important
characteristic of these sites is that there is not
a resident LISP wizard; the programs must be extremely robust even
when run by naive users.  In our case, for the foreseeable future
at least, the application environment will certainly be based on the VAX,
probably running under VMS.  We would obviously prefer it if the research
environment were the same, but there must at least be a way for a program
to make a smooth transition from one to the other (e.g., source-code
compatibility).

We do not feel particularly strongly about which of the
alternative LISP environments we eventually use.  We have both
INTERLISP and MACLISP users, generally reflecting their previous
experiences more than major technical issues.  On two specific points,
however, we have strong opinions:  it will be important to handle
numeric data as efficiently as symbolic data;
high quality graphics is extremely important, for both the
research and application environments.

Finally, it is clear that we all would benefit greatly if unity within
the LISP community were achieved; if there are appropriate ways, we
are both willing and eager to help the community achieve that unity.

David R. Barstow
Schlumberger-Doll Research
6.April.81
-------

          --------------------
End forwarded messages