12-Apr-86 18:32:47-PST,16682;000000000000
Mail-From: NEUMANN created at 12-Apr-86 18:30:55
Date: Sat 12 Apr 86 18:30:55-PST
From: RISKS FORUM    (Peter G. Neumann, Coordinator) <[email protected]>
Subject: RISKS-2.40
Sender: [email protected]
To: [email protected]

RISKS-LIST: RISKS-FORUM Digest,  Saturday, 12 Apr 1986  Volume 2 : Issue 40

          FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
 GREAT BREAKTHROUGHS [Red Herrings swimming upstream?] (Dave Parnas)
 Military battle software ["first use", "works"]
   (James M Galvin, Herb Lin, Scott E. Preece, Dave Benson)
 First use - Enterprise (Lindsay F. Marshall)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, nonrepetitious.  Diversity is welcome.
(Contributions to [email protected], Requests to [email protected].)
(Back issues Vol i Issue j stored in SRI-CSL:<RISKS>RISKS-i.j.  Vol 1: MAXj=45)

----------------------------------------------------------------------

Date: Fri, 11 Apr 86 07:34:38 pst
From: Neumann@SRI-CSL (Peter Neumann)
Subject: GREAT BREAKTHROUGHS [Red Herrings swimming upstream?]
To: RISKS

In this issue of RISKS, we include a commentary on the article by Fossedal,
contributed to me privately by Dave Parnas, reproduced with his permission.

  >In an article in the Sunday San Diego Union, Gregory Fossedal (Copley
  >News Service) discusses the "rapid advance of SDI."....  He then goes on to
  >discuss progress by software engineers, and says that "concepts in
  >computer software ... have leaped ahead."  He indicates that critical
  >arguments "...that 'a single error' could cripple the whole shield apply
  >only to outmoded types of unwieldy, highly centralized software.  Thanks
  >to new software ideas, Star Wars defenses need not be run by a grand
  >central brain."

Message from Dave Parnas follows:

       One of the more amazing aspects of this report is that no plan
  ever called for the defenses to be run by a "grand central brain".  If
  you read the unclassified volume of the Fletcher report, you will find
  a proposal for a highly decentralized distributed system.  The Fletcher
  panel worried about the survivability of the system and proposed a
  system in which each battle station could function on its own if others
  were destroyed.  They even rejected a military-like hierarchical
  command structure for the computers so that there would be no "Achilles
  Heel" in the system.  Nothing that I have read ever proposed a centralized
  system.

       When the SDIO Panel on Computing in Support of Battle Management
  (PCSBM) announced that people were assuming a highly centralized system
  as per the Fletcher report, they were using a classic political technique,
  the "red herring".  The Fletcher panel was not anywhere near as stupid
  as they implied.  I have not seen the contractor designs but I cannot
  believe that they were as stupid as was suggested either.

       Some of the newspaper reports on the PCSBM red herring suggest that
  there is a proposal to build a network in which the battle stations remain
  autonomous by having no communication.  That is simply not the case.  Every
  report that I have seen calls for extensive communication between those
  stations.  Weapon stations that were denied the use of data obtained by
  other satellites would be severely handicapped and more easily defeated.

       Fossedal's reference to "a single error" is part of another red
  herring in which SDIO supporters claim that the critics want perfection.
  The only reference to "error-free software" came from SDI supporters;
  none of the critics has assumed that perfection was needed.  You only
  have to get rid of the errors that matter.  Some claim this as a new
  discovery as well.

       When Fossedal reports such great progress, it is progress from a
  position that was never held by any responsible computer system designer.

[End of message from Dave Parnas]

------------------------------

To: "Scott E. Preece" <preece%[email protected]>
cc: [email protected]
Subject: Military battle software
Date: Thu, 10 Apr 86 15:52:59 -0500
From: James M Galvin <[email protected]>

> From:   preece%ccvaxa@gswd-vms (Scott E. Preece)
> Date:   Mon, 07 Apr 86 09:43:05 -0600.
>
> There are two essential, undefined terms in this statement: "first use"
> and "worked". ...

What about your essential, undefined phrase "convinced that it works"?
In the context of your argument I assume you are being facetious, but it
is not clear.  I will agree with you if what you are saying is that
"convinced that it works" is really just a "small probability of failure".
True, I trust my life to my car every day, but who's to say that someday
the steering column won't fail?

The next question is how small a probability is desired and how is it
achieved?  Isn't that an essential component of Parnas' argument?

Jim

------------------------------

Date: Sat, 12 Apr 86 14:39:15 EST
From: Herb Lin <[email protected]>
To: benson%[email protected]
cc: [email protected], [email protected]

    | From: preece%ccvaxa@gswd-vms (Scott E. Preece) [...]
    | There are two essential, undefined terms in this statement: "first use"
    | and "worked".

Actually, the meaning of first use for a missile defense system is
pretty clear -- it means the first time the Soviets launch an attack
on the U.S.

    | The question of how perfectly it has to work is the central one.

Not true.  The central question is how well you can know its
performance before it is called into action.

    | If you build the thing, you don't trust your security to it until
    | you have been damned well convinced that it works...

What would you consider sufficient to convince you that it "works"?
What evidence of "working" should the nation accept as "proof" that it
works?  If there is no evidence short of an ensemble of nuclear wars,
then it is a meaningful statement to say that "you will never know".

------------------------------

Date: 11 Apr 1986 08:58-CST
From: preece%mycroft@gswd-vms (Scott E. Preece)
Subject: Information about military battle software
To: galvin%dewey.udel@gswd-vms
Cc: [email protected]

> The next question is how small a probability is desired and how is it
> achieved?  Isn't that an essential component of Parnas' argument?

Yes, I think that's the essential question.  I think Parnas is saying that
you can never prove adequately that the probability is sufficiently small,
so you might as well not work on the question.
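
  [An editorial aside, not part of the original exchange: a standard
  back-of-the-envelope calculation shows why "proving the probability is
  sufficiently small" is so demanding.  If a system survives n independent
  end-to-end trials with zero failures, the one-sided 95% upper confidence
  bound p on its per-trial failure probability satisfies

       (1 - p)^n = 0.05,  i.e.,  p ~ -ln(0.05)/n ~ 3/n   (the "rule of three").

  Demonstrating a failure probability below 1 in 1,000 at that confidence
  level would therefore take on the order of 3,000 failure-free full-scale
  trials; below 1 in 1,000,000, about 3,000,000.]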

I wear my seatbelt BECAUSE there is always a probability that my steering
will fail or the wetware guiding some other vehicle will fail.  I know there
is also a small probability of the seatbelt failing, but there the risk
is low enough for me to accept.  If I could have airbags in a car I could
afford, I would.

I don't know if it is possible to build software systems capable of dealing
with the problems inherent in SDI.  I don't know what level of testing and
verification would be necessary to convince me that the software (and the
hardware) worked.  I think Parnas is saying that it IS impossible to do and
that NO proof could be sufficient.  I think that's wrong-headed.

There are perfectly good arguments against going ahead with SDI --
destabilization is sufficient in itself, cost and the false sense of
security are also strong arguments.  Short-range submarine-based missiles,
cruise missiles, and emplaced weapons are further arguments.

I think the Parnas arguments are tangential and misleading.  He creates a
situation where every time someone says "But look at system X; it worked
fine when it became operational" it becomes an argument for the pro-SDI side.
Arthur C. Clarke (in his First Law) said, approximately, "Whenever a very
senior scientist says something is impossible, the odds are he's wrong."
That's the way I react
automatically to Parnas's arguments.  I think a lot of other people do, too.

scott preece   [gould/csd - urbana]
  uucp: ihnp4!uiucdcs!ccvaxa!preece

------------------------------

Date: Wed, 9 Apr 86 23:56:33 pst
From: Dave Benson <benson%[email protected]>
To: risks%[email protected]
Subject:  Preece's msg, first-time software, and SDI

To keep the thread of the discussion, I quote liberally from Preece's
msg to RISKS and comment on certain sections:

|Date: Mon, 7 Apr 86 09:43:05 cst
|From: preece%ccvaxa@gswd-vms (Scott E. Preece)
|Subject: Request for information about military battle software
|> [Parnas, quoted by Dave Benson]

Correction.  This is from a report of a talk by Parnas.  I believe it
correctly represents Parnas' views, but may not be a quotation. I did not
have the opportunity to listen to the talk.  Pullman is 300 airmiles from
Seattle. The full report appeared on the ARMS-D bboard.

|> The other members of the SDI advisory panel that David Parnas was on
|> and other public figures have said "Why are you so pessimistic?  You
|> don't have any hard figures to back up your claims."  Parnas agreed
|> that he didn't have any until he thought of the only one that he
|> needed: ZERO...
|
|There are two essential, undefined terms in this statement: "first use"
|and "worked".  The shuttle Enterprise, for instance, worked the first
|time they dropped it from its carrier 747.  Was that its "first use", or
|do you count the many hours of simulation preceding that first flight?
|I wasn't there and have no idea whether there were bugs that showed up,
|but they clearly didn't keep the test from succeeding.  Is that "working"?

My interpretation:  The simulation preceding the first flight is not the
"first use" I had in mind.  The first operational use of real-time control
software is.  So your example is a good illustration of the working of
first-use real-time control software with humans (pilots and ground
personnel) in attendance.  In the minimum sense that the Enterprise was
piloted to a landing, the test was indeed a success.  (It may have been a
success in many other ways as well-- not the issue here.)  So, the software
clearly worked.  Furthermore, at least the test pilots trusted it to work,
so it is an example of a real system which was trustworthy at first use.

I appreciate having this example drawn to my attention.  Over and over again
I am impressed with NASA-sponsored software, and this is another example of
how well NASA software contractors have done their work.  Any reader who
has helped build NASA software should take pride in some of the finest
real-time control software ever engineered.

However, my call was for military battle software.  Landing the shuttle
Enterprise does not qualify on these grounds. (It might not qualify on other
grounds in that the purpose of the space shuttle is not to drop from the
back of a 747 and land successfully.  This was only a partial operational
test of the flight software.  The first full operational test was attempting
to put the shuttle in orbit.  If I recall correctly, there was a
synchronization fault in the software...  I don't want to quibble.)

If some of you have other NASA real-time control software stories to
contribute, especially if you are willing to make a judgement about how well
it worked the first time, I would greatly appreciate reading your
contributions.  Please send them directly to me, unless you think the
stories have relevance to the purposes of the RISKS bboard.  Thank you.  But
what I am primarily looking for is military battle software experiences.

|The trouble with a debate like this is that it tends to force people
|more and more into idiotic dichotomized positions.  SDI software would
|obviously be a huge challenge to produce and validate.  I have no hope
|it would work perfectly the first time used; I have no reason to believe
|it wouldn't work partially the first time it was used.  The question of
|how perfectly it has to work is the central one.

I agree with the last sentence cited.  In existing military battle
equipment, when employed in realistic maneuvers or in actual battle, there is
a mission to be accomplished.  If the mission is accomplished in the FIRST
ATTEMPT, then this negates Parnas' claim.  If the mission is not
accomplished, his hypothesis stands.  We see that Parnas' statement
satisfies one of the criteria for a scientific hypothesis:  It can be
rendered false by a single experiment.

One could imagine situations in which the mission is partially accomplished.
With the destructiveness of modern weaponry (and I'm not even including
nuclear devices in this thought), it is usually possible for a disinterested
judge to easily place such partial accomplishment in the Yea or Nay column.
(However, no such cases have yet come to my attention, beyond Herb Lin's
discussion of the Aegis test in his Scientific American article, December
1985 issue.  This test is an obvious failure for the software.  There were
particular requirements which the software failed to meet.)

So I think it perfectly reasonable to attempt to collect data about actual
military software, irrespective of SDI.  Parnas has stated a strong,
refutable claim: a testable hypothesis, if you will, about the software
engineering of military battle software.  The only sort of experiment I can
do is to ask whether any of you, or any of your friends, peers, or
associates, know of any actual experience to the contrary.  It takes only
one reliable, honest piece of such information to refute Parnas'
claim.  I'm still waiting.

I remain of the opinion that actual engineering experience teaches some
important facts about the artifactual world in which we live.  Our
engineering successes, our engineering failures, eventually provide an
understanding of what works and what does not.  The successes and failures
place the limits on our ability to understand, in an engineering sense, the
real world.  Put a bit more strongly than I really mean (it would take a
long essay to explain; see Petroski's book "To Engineer is Human"),

   Engineering is the design of artifacts, using the accumulation of
   knowledge about artifacts gained through experience with similar artifacts.

|The task is too ill defined to be making statements about whether it can be
|done.   [The task being SDI battle software, dbb]

I beg to differ with this statement.  Pick a mission, any mission for SDI
other than the trivial one that SDI does absolutely nothing at all.  This
becomes the requirement for the battle software.  So far there is no
evidence that the SDI battle software would complete your mission on first
operational use.  There is only evidence that this battle software, like all
battle software, would fail in the first operational use.

Therefore data, facts, about the first operational use of military battle
software are relevant to the question of whether any nontrivial mission for
SDI is possible in actual engineering practice.  This data does make a
difference in attempting to understand whether SDI battle software would or
would not work the first time.

Thank you for this opportunity to expostulate.

I remain, still waiting for data to refute Parnas' claims,      Dave Benson

PS. Please send refuting data to benson%wsu@csnet-relay
Mail to: Professor David B. Benson, Computer Science Department,
Washington State University, Pullman, WA 99164-1210.

------------------------------

From: "Lindsay F. Marshall" <ncx%[email protected]>
Date: Thu, 10 Apr 86 08:53:05 gmt
To: [email protected]
Subject: First use - Enterprise
                            [Two messages are collapsed into one, omitting
                            my intervening request for clarification.  PGN]

I must admit that regarding the first shuttle flight, I had heard that
there was a serious computer failure immediately after the vehicle had
been released.

This story comes from Jack Garman, via Tom Anderson.  On the first glide test
of the shuttle from the back of a 747, the first two messages on ground
telemetry were: "Explosive Bolts Fired" and "Computer No. 3 Failed".

       Lindsay

------------------------------

End of RISKS-FORUM Digest
************************
-------