Mail-From: NEUMANN created at 26-Oct-86 23:13:39
Date: Sun 26 Oct 86 23:13:39-PST
From: RISKS FORUM    (Peter G. Neumann -- Coordinator) <[email protected]>
Subject: RISKS-3.87 DIGEST
Sender: [email protected]
To: [email protected]

RISKS-LIST: RISKS-FORUM Digest,  Sunday, 26 October 1986  Volume 3 : Issue 87

          FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
 System Overload (Mike McLaughlin)
 Information Overload (Mike McLaughlin)
 SDI assumptions (Herb Lin)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, nonrepetitious.  Diversity is welcome.
(Contributions to [email protected], Requests to [email protected])
 (Back issues Vol i Issue j available in CSL.SRI.COM:<RISKS>RISKS-i.j.
 Summary Contents in MAXj for each i; Vol 1: RISKS-1.46; Vol 2: RISKS-2.57.)

----------------------------------------------------------------------

Date: Sun, 26 Oct 86 21:13:56 est
From: mikemcl@nrl-csr (Mike McLaughlin)
To: risks@csl
Subject: System Overload

Back in Systems 001 I was taught that a system under overload, be it a reactor
control or SDI, fails in one or more of the following ways:
       1.  It sacrifices quality of work.
       2.  It sacrifices throughput rate.
       3.  It fails catastrophically (crashes).
       4.  Any combination of the above.
Can a given system be designed to fail in a _chosen_ manner, so that it does
not crash - i.e., "graceful degradation"?  Of course.  I see no reason why new
systems cannot do the same - at least in regard to the overload portion of the
problem.

------------------------------

Date: Sun, 26 Oct 86 21:39:26 est
From: mikemcl@nrl-csr (Mike McLaughlin)
To: risks@csl
Subject: Information Overload

Undoubtedly we can load sensors onto a system until, from their sheer number,
it will no longer fly, move, fight, or whatever.  Airplane cockpits
already provide more information than pilots can handle.  Combat sensor
systems provide more data than battle-managers can handle.  On the early
space flights we even instrumented the astronauts themselves -- in a manner
that should not be discussed on a family forum.  There seems little point
in providing a cockpit display of the pilot's rectal temperature; but on the
ground someone cared.

One of the functions being performed by computers today is to filter the
information, so that the system operator sees relevant data.  One of the
tough parts is to decide what is relevant.  I submit that "operator
assistant" computers deserve special care in design and testing.  They seem
to be used where lives are at stake, and where data is available.  Relying
on the computer to decide what is "relevant" in a given situation is fraught
with risk.  Relying on a human to decide in advance of the situation is not
much better.
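
To make the point concrete, here is a toy filter in C - every name, number,
and threshold invented - showing how the "relevance" decision gets frozen
into the program long before the situation it is meant to handle:

    /* Sketch of an "operator assistant" display filter.  The
       threshold encodes, in advance, a human guess about what
       will matter when things go wrong.                        */
    #include <stdio.h>

    #define RELEVANCE_THRESHOLD 0.75   /* fixed at design time */

    struct reading {
        const char *sensor;
        double value;
        double relevance;   /* set by some scoring routine */
    };

    /* Show the operator only what the program thinks matters. */
    void display(const struct reading *r, int n)
    {
        int i;
        for (i = 0; i < n; i++) {
            if (r[i].relevance >= RELEVANCE_THRESHOLD)
                printf("%-10s %8.2f\n", r[i].sensor, r[i].value);
            /* Readings below the threshold are silently withheld,
               which is exactly where the risk lives.             */
        }
    }

    int main(void)
    {
        struct reading sample[] = {
            { "coolant",  452.00, 0.90 },
            { "magazine",  61.00, 0.40 }   /* withheld, important or not */
        };
        display(sample, 2);
        return 0;
    }

A different threshold or scoring routine is a different judgment about what
the operator needs to see, and it is made at design time, not at the console.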

Another area of concern is the "transition" problem discussed in previous
issues.  I don't know that Navy Propulsion reactors are under-computerized
deliberately, accidentally, or at all.  Having been a watch officer in the
Navy and having lived through a number of unexpected emergencies, I can
personally attest to the seriousness of the "transition" problem - even
without computers.  To be awakened from sleep with alarm bells ringing and
bullhorns blaring "FIRE, FIRE, FIRE IN NUMBER TWO MAGAZINE!" - and then be
standing dressed, over the magazine, and in charge of the situation in less
than 60 seconds is quite an experience.  That I am here to recognize the
problem is due to excellent training of the entire crew, not to any
specific actions on my part.  Frankly, I just "went automatic" and shook
after it was over, not during.  I suspect that any pilot, truck driver,
policeman, etc. could tell a dozen similar tales.

I'm not proposing any answers - except for extreme care.

       - [email protected]

------------------------------

Date: Sun, 26 Oct 1986  23:48 EST
From: [email protected]
To:   [email protected] (Daniel M. Frank)
Cc:   [email protected]
Subject: SDI assumptions

   From: prairie!dan at rsch.wisc.edu (Daniel M. Frank)

   Much of the concern over "perfection" in SDI seems to revolve around
   this model (aside from the legitimate observation that there is no such
   thing as a leakproof defense).

I've said it before, but it bears repeating: no critic has ever said
SDI software must be perfect.  The only ones who say this are the
pro-SDI people who are criticizing the critics.

   The [SDI] dialogue would be better served by agreeing on a model, or set
   of models, and debating the feasibility of software systems for
   implementing them.

Having a "set of models" means that those models share certain
characteristics.  There is one major characteristic that all SDI
software will share: we will never be able to test SDI software --
whatever its precise nature -- under realistic conditions.  The relevant
question, then, is "What can we infer about software that cannot be
tested under realistic conditions?"

------------------------------

End of RISKS-FORUM Digest
************************