Aucbvax.5784
fa.works
utcsrgv!utzoo!decvax!ucbvax!works
Thu Jan 14 06:39:33 1982
WorkS Digest V2 #5
From JSOL@USC-ECLB Thu Jan 14 02:23:15 1982
Works Digest            Thursday, 14 Jan 1982       Volume 2 : Issue 5

Today's Topics:             Administrivia
                 Query Answer - Why Virtual Memory
                       What Is A WorkStation
----------------------------------------------------------------------

Date: 13 January 1982 21:04-PST
From: The Moderator <JSOL AT RUTGERS>
Subject: Administrivia - Problems (hopefully) straightened out

At this point, you should all be up to Volume 2, Issue 5. The mail
problems of the past few days should be straightened out. At the very
least, the people responsible for the machine which distributes the
digest have more control over the distribution. This should mean
better service for all subscribers.

This also means that I want to hear about every little nit picking
problem that happens which could be linked to me or this digest. Send
any bugs or gripes to WorkS-Request@USC-ECLB. Thanks.

Enjoy,
JSol

------------------------------

Date: 12 Jan 1982 22:44:33-PST
From: pratt@Shasta at Sumex-Aim
Subject: asc answered

To answer another asc question:

Why virtual memory?  The main fact supporting the need for VM is:

       While available physical memory per machine continues to grow,
       programs and their data are both managing to keep ahead in
       size.

If this fact doesn't hold for  your particular system, you don't need
VM.  You are either lucky (maybe you only use your computer to search
for large primes)  or software-poor.   For many if  not most computer
users the above is an inescapable fact.  For them, if anything the gap
is growing.

This fact alone does not lead inevitably to VM.  Programs can always
be broken up into overlays.  Data can always be maintained on files
and the relevant portions read and written when needed.  However with
a simple main memory one pays one of two prices to get at more data
than the memory will hold: either program structure or program
performance suffers.

Program structure suffers when you have to complicate a simple
algorithm by inserting code to read and write files, and to translate
between internal and external representations as data moves between
primary and secondary memory.  Programmers get so used to this mode of
operation that they tend not to notice the loss of structure in their
programs.  However it can be made quite apparent by rewriting the code
pretending that the whole universe lives in primary memory.

You can restore your program structure by treating all references as
composite references to a file and an object in that file, and
interpreting those references with routines that decide when to swap
subroutines and data in and out.  Done right, the only place where the
primary-secondary memory distinction shows up is in the interpreter of
the references; your program itself need no longer make the
distinction.
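The scheme just described can be sketched in C.  This is a
hypothetical illustration, not code from any actual system: a
reference names a file and an object within it, and one small
interpreter routine is the only code that knows memory comes in two
levels.

```c
/* A minimal sketch of the composite-reference scheme described above.
   All names here are hypothetical.  A reference names a file and an
   object within it; deref() is the one routine that knows about the
   primary-secondary memory distinction. */
#include <string.h>

#define SLOT_EMPTY -1

struct ref { int file; int object; };       /* composite reference */

/* Toy "secondary memory": 4 files of 8 ints each. */
static int disk[4][8] = {
    { 10, 11, 12, 13, 14, 15, 16, 17 },
    { 20, 21, 22, 23, 24, 25, 26, 27 },
};

/* A one-file cache standing in for primary memory. */
static int cache_file = SLOT_EMPTY;
static int cache[8];
static int faults;                          /* trips to secondary memory */

/* The interpreter of references: decides when to swap data in. */
int deref(struct ref r)
{
    if (cache_file != r.file) {             /* not resident: fetch it */
        memcpy(cache, disk[r.file], sizeof cache);
        cache_file = r.file;
        faults++;
    }
    return cache[r.object];
}
```

The program logic calls deref() and never mentions files; all the
swapping policy lives in that one routine, which is exactly where the
performance price of this approach is paid.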

The price for this approach is performance; all references triple or
worse in cost.  Performance-wise, you are much better off sticking to
your original ill-structured program.  Few programmers attach more
importance to structure than to performance.

The role of VM is to restore program structure without the performance
price of the interpretive solution.  With VM, references cost about
the same as with a simple memory scheme for references to objects
already in main memory.  Page faults cost, but the idea is that any
other scheme may well have to go to secondary memory approximately as
often as a VM scheme.  The big saving then is clean program structure
at little or no performance cost.
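The trade-off in the last two paragraphs can be put in back-of-envelope
form.  The model below is an illustrative assumption, not a
measurement: let t be the cost of an in-core reference, f the fraction
of references that must go to secondary memory, and d the cost of a
disk transfer.

```c
/* Illustrative cost model of the two schemes.  The interpreted scheme
   pays a (roughly) tripled cost on EVERY reference; VM pays the plain
   in-core cost plus faults -- which any scheme pays anyway. */
double interp_cost(double t, double f, double d)
{
    return 3.0 * t + f * d;   /* interpreter overhead on each reference */
}

double vm_cost(double t, double f, double d)
{
    return t + f * d;         /* hardware-mapped in-core references */
}
```

For any t > 0 the interpreted scheme loses, and the gap is pure
overhead: both schemes pay the same f * d for trips to secondary
memory, so the saving of VM is exactly the interpreter's per-reference
cost.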

The program-structure problem is particularly visible in a language
that encourages one or another form of structure in one's programs,
e.g. Lisp and APL.  A simple recursive Lisp routine can be quite
mangled by a need to move some of its data between primary and
secondary storage.  And an APL one-liner can turn into a two-pager if
secondary storage is needed.  For such languages VM becomes
particularly vital.

If you bring your old file-manipulating programming style with you to
a VM environment you will naturally ask "Why do I need VM?"  You
don't, if you stick to that style.  The reason the Sun project can
continue without VM is that all of the code we inherit, most
Unix-style C code, is written for manual overlays and heavy file
manipulation.  While we stick to that style we can put off VM day.  We
would like to be able to get away from that style however, both to
improve the style of our low-level programs (C, Pascal) and to be able
to write Lisp and APL programs in something better than a Fortran
style laced with file operations.

                                       Vaughan Pratt

------------------------------

Date: 13 January 1982 1219-EST (Wednesday)
From: Hank Walker at CMU-10A
Subject:  virtual memory, workstation, big chip

For those that think that you don't need virtual memory when you have
lots of physical memory, read Peter Denning's editorial in this
month's CACM.  Any architect will say that you need it, and that it
doesn't cost that much to implement (at least not VAX-style virtual
memory).

The question of "what is a workstation" reminds me of the "what is a
minicomputer" discussion.  In the beginning, a minicomputer was defined
by its size, price, and computational power.  It was soon realized
that since speed changes over time, the defining factors are size and
price, not performance.  A minicomputer is probably anything in the
$5,000-$250,000 small box to VAX-size range.  Similarly, a workstation
is defined by its basic features, such as keyboard, network
interface, graphics, disk, or whatever, and its basic price, such as
$10,000-$30,000.  It is not defined by any performance measures except
for minimal ones.  Available software might also define a workstation,
although this too changes radically over time.

For those that believe that someone is going to announce a 2 megabit
RAM next year, I suggest that you read the literature such as the IEEE
Journal of Solid-State Circuits or the IEEE Journal of Electron
Devices.  The largest reported chip to date was a nonworking 1 megabit
Japanese chip.  The largest reported working RAMs are 256K RAMs, which
have been reported by several sources.  In other words, there is no
way that anyone is going to announce a 2 megabit RAM for sale for
quite some time, if only because they can make much more money selling
smaller chips.  A 2 megabit RAM will be announced by 1985 at the
earliest, except possibly for laboratory experiments.

Now, if you were talking about a 2 megabit bubble chip...

P. S.  Rumors seen in BYTE that I know something about are usually not
very accurate, but do contain some grain of truth.

------------------------------

Date: Wednesday, 13 January 1982, 18:19-EST
From: Daniel L. Weinreb <DLW AT MIT-AI>
Subject: OK, What is a WorkS Digest?

It seems backwards for us to all join a Workstations mailing list and
then all ask "What is a Workstation?".  I think a more profitable
question might be, "What is it that we all want to talk about?".  I'd
be happy to just say that we want to discuss single-user computers
intended for interactive use, with interesting user interfaces.

[As Moderator, I would be disappointed if the WorkS mailbox became
filled with endless reams of people trying to figure out what they
want to talk about. WorkS has a very defined list of topics, and
answering the question "What is a WorkStation" is definitely one of
them. In fact the creation of this list was in part to come up with
some clear cut ideas of just what a WorkStation is. -JSol]

As to whether you need virtual memory: well, at least SOME personal
computers require some or all of the following: large address spaces,
large amounts of memory, and uniform addressing.  Bob Frankston's
recent mail described these issues.  If everything you're doing fits
into an affordable amount of RAM, then it is wasteful to use mapping for
demand paging.  Lisp Machines need a WHOLE LOT of memory, and we
cannot possibly have that much RAM on the machines; our need for
demand paging is clear to us.  Certainly you can get some worthwhile
things done in far smaller computers, and I think that 68000's with
lots of main memory and no demand paging will be quite useful for many
applications.  A 68000 with a lot of main memory would be fine for
running VisiCalc; the existing computers are distinctly too small for
many real-world applications to fit comfortably.

------------------------------

Date: 13 Jan 1982 1910-CST
From: CS.APPLEWHITE at UTEXAS-20
Subject: Workstation rationale and virtual memory

I don't propose to give a precise definition of 'workstation' (I don't
think a useful one exists), but I will describe *WHY* and *WHAT*
I want in a workstation.  I think my basic desires have been described
in this digest before.  I want the basic power of a reasonably large
machine (say a VAX) on hand so that I can build up a support
environment *I* like.

There isn't anything qualitatively different a workstation can offer
that isn't available on a conventional machine.  What is  different is
the amount of control I have over the workstation's resources.  For
example, I have a set of experimental VLSI programs which will eat up
hours of cpu time at the drop of a hat.  Given that someone has to pay
for machine resources, the workstation is cheaper than, say, the 2060
I'm editing this note on.

Virtual Memory :  There is never enough main memory.  No matter how
much I  have, my ambitions soon exceed it.  For example,  my main
interest for a workstation is VLSI design, and I have developed a set
of programs to do this.  Unfortunately, the size of my database is
quite large compared to main memory, and a lot of cycles are needed
to massage it.  It's counterproductive (and expensive) to spend time
splitting it up.  Access to a large number of objects simply requires
a large number of address bits. I can do all of this on a big machine,
but it's more expensive and (wall clock) probably  slower.

For the sake of argument, assume that physical memory is large enough.
With virtual memory, my application will run on any workstation I have
access to; without it, the application will run only on machines
configured with enough memory.  The point is that virtual memory
separates the application from details of the machine configuration.
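Hugh's point can be illustrated with a later mechanism, POSIX mmap
(which postdates this digest); the sketch below is hypothetical, not
one of the programs he describes.  The database file is mapped into
the address space and indexed like an ordinary array; how much of it
is resident at once is the operating system's problem, not the
program's, so the same code runs on any memory configuration.

```c
/* Hypothetical sketch: treat a database file far larger than RAM as an
   in-core array of ints.  The kernel pages pieces in on demand; the
   loop never mentions files, buffers, or physical memory size. */
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

long sum_records(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    struct stat st;
    fstat(fd, &st);
    size_t n = (size_t)st.st_size / sizeof(int);
    int *db = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                       /* mapping survives the close */
    if (db == MAP_FAILED)
        return -1;

    long total = 0;
    for (size_t i = 0; i < n; i++)   /* looks like an in-core array */
        total += db[i];
    munmap(db, (size_t)st.st_size);
    return total;
}
```

Whether the file is smaller than RAM or a hundred times larger, the
program is unchanged; only its wall-clock time varies with the
configuration.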


-Hugh

------------------------------

End of WorkS Digest
*******************
-------

-----------------------------------------------------------------
gopher://quux.org/ conversion by John Goerzen <[email protected]>
of http://communication.ucsd.edu/A-News/


This Usenet Oldnews Archive
article may be copied and distributed freely, provided:

1. There is no money collected for the text(s) of the articles.

2. The following notice remains appended to each copy:

The Usenet Oldnews Archive: Compilation Copyright (C) 1981, 1996
Bruce Jones, Henry Spencer, David Wiseman.