Mail-From: NEUMANN created at  9-Jun-86 23:32:59
Date: Mon 9 Jun 86 23:32:59-PDT
From: RISKS FORUM    (Peter G. Neumann, Coordinator) <[email protected]>
Subject: RISKS-3.4
Sender: [email protected]
To: [email protected]

RISKS-LIST: RISKS-FORUM Digest,  Monday, 9 June 1986  Volume 3 : Issue 4

          FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
 Re: Watch this Space (Mark Jackson, Eugene Miya)
 Software developer's liability (Paul Schauble)
 What an Algorithm!! (Brian Bishop)
 Sgt. York's Latrine, and other stories (Mike McLaughlin, Ken Laws)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, nonrepetitious.  Diversity is welcome.
(Contributions to [email protected], Requests to [email protected].)
 (Back issues Vol i Issue j available in SRI-CSL:<RISKS>RISKS-i.j.
 Summary Contents in MAXj for each i; Vol 1: RISKS-1.46; Vol 2: RISKS-2.57.)

----------------------------------------------------------------------

Date: 9 Jun 86 10:57:10 EDT (Monday)
From: [email protected]
Subject: Re: Watch this Space (RISKS-3.3)
To: Eugene Miya <[email protected]>
cc: [email protected]

Your comments on the conflict between reducing bureaucracy and increasing
the number of persons in the loop take us rather far afield from the risks
of computer use...but they are similar to some concerns I've had for some
time, and the "complexity" issue has relevance to this list, so what the heck.

In my opinion one of the *major* challenges facing humans is the need to
find better ways of structuring organizations, and training individuals to
function within organizations.  Our present performance ranges from barely
adequate to abysmal; the current consequences of this performance level are
extremely serious, and the prospects are that these consequences will get
worse.  Blindly intoning "we need less bureaucracy" is no help.

Those are strong statements; let me explain.  When the number of persons
necessary to an enterprise rises much above that appropriate to a single
work-group, some *organizational* as opposed to *individual* division of
responsibility becomes necessary.  (Xerox doesn't build copiers by getting
several thousand employees together, telling them all to "build copiers at a
profit," and leaving them to their own devices thereafter.)  As the
compartmentalization of the organization increases, the relationship between
the output of each unit and the goals of the organization becomes less
clear.  "Do your job right" becomes an unsatisfactory performance criterion;
specifications become of necessity more formal.  It becomes possible for
individuals or sub-organizations to prosper by appearing to meet proper
criteria, or by actually meeting improper criteria; such performance may
actually hinder the successful fulfillment of the intended organizational
goals.  Individual behavior tends toward that which is *actually* rewarded
by the organization, as opposed to that which is *stated* to be desired.
It's like entropy; all the forces are toward declining performance, and
because it's a coupled (people/structure) problem the trends are extremely
difficult to reverse.

It is presently fashionable to point to the government as a bad example of
rampant bureaucracy.  This is to an extent fair; I believe there are two
reasons that the problem is generally worse in government than in the
business sector:

 1) We desire of our government that it be one of "laws not of men"; this
 requires formal specification of acceptable performance (laws and
 regulations).  If OSHA published simple, common-sense guidelines ("don't
 unduly endanger your employees") they'd be thrown out by the courts on
 the perfectly sound grounds that the proscribed behavior was undefined;
 instead we get five-page definitions of an acceptable ladder and such.

 2) The constraint on organizational reasonableness which acts on
 business (don't be so unprofitable as to go bankrupt) is somewhat
 stronger than that on government (don't be so expensive and unresponsive
 as to cause the voters to rebel).

But the differences are those of degree, not of kind; I suspect that #1
above is the more important, and I am extremely skeptical of those who
contend that a good dose of free enterprise will serve to solve, by
Darwinian selection, the organizational problem.  And the problem applies to
not-for-profit, military, and all other "large" organizations as well.

Draw what parallels with large hardware/software systems you wish; AI buffs
may note the analogy with the notorious difficulty of programming "common
sense", for example.

Mark

"Absolute truth?  What's that?"
"It's a five-to-four decision of the Supreme Court."
                       -- Dan O'Neil

------------------------------

From: Eugene Miya <[email protected]>
Date:  9 Jun 1986 1521-PDT (Monday)
To: [email protected]
Cc: [email protected]
Subject: Re: Watch this Space (RISKS-3.3)

I just watched a televised session with Rogers and Fletcher (on our own
internal TV feeds).  Permit me to clarify the forthcoming dilemma.  The
matter is not solely a problem of "bureaucracy."  "Bureaucracy" is an
artifact, and the word has a tainted connotation.  Another, perhaps clearer,
artifact is the trend in NASA from a centralized agency to a decentralized
one (the NASA Centers really became "Centers") and now back to a more
centralized agency (command at NASA HQ), versus the more decentralized,
admittedly automated, approaches that SDI (Cohen et al.) is proposing.

 Aside:  Are automated bureaucracies any better than human bureaucracies?

The gist of what I hear Mr. Jackson saying concerns the nature of organizing
complex systems (a la Simon's Sciences of the Artificial).  I would also
note that Jacob Bronowski observed, just before he died, that the great
challenge facing humans was the balance of individuals (I extrapolate to
include centralized authority) against groups (decentralized).

The point of my posting was to note that we have reached an interesting
juncture, and that we should be prepared to note the different paths taken,
for future comparisons (and future misinterpretations).  Another interesting
thought occurs to me about SDI, but that will be a separate note which I
will Cc: to Arms-d.

Again, the viewpoints expressed are personal and not views of the Agency.

From the Rock of Ages Home for Retired Hackers:

--eugene miya
 NASA Ames Research Center
 [email protected]
 "You trust the `reply' command with all those different mailers out there?"
 {hplabs,hao,dual,ihnp4,decwrl,allegra,tektronix,menlo70}!ames!aurora!eugene

------------------------------

Date:  Sat, 7 Jun 86 23:29 EDT
From:  Paul Schauble <[email protected]>
Subject:  Software developer's liability
To:  [email protected], [email protected]

These two items are from the June 3, 1986 issue of PC WEEK.

 IRS I: The Internal Revenue Service has thrown a chill over the PC software
 business. It recently ruled that creators of computer programs that help
 taxpayers prepare their tax returns may be subject to penalties if the
 program gives bad advice. The ruling will put the software developers on the
 same footing as flesh-and-blood tax advisors:  at risk.

 IRS II: TCS Software of Houston is already in trouble with the IRS. The
 company was contacted by the IRS because its tax-preparation software
 program, Client Tax Series-1040, was listed as the tax preparer on the 1985
 tax return of one Richard P. Jamerson.

The IRS was up in arms because Mr. Jamerson had used a fictitious Social
Security number, hadn't included a check with the tax return, hadn't signed
the return, and hadn't included a W-2 form.  Fortunately for TCS, Mr.
Jamerson owes
no taxes since he doesn't exist.  He is the totally fictitious example that
goes out with the TCS package to show users how the software package works.
Apparently, one of the sample returns was inadvertently mailed to the IRS.

         Paul Schauble at MIT-Multics.arpa

------------------------------

Date: Fri 6 Jun 86 14:37:26-PDT
From: Brian Bishop <[email protected]>
Subject: What an Algorithm!!
To: [email protected]

>->     Maybe what SDI should really be is a big perimeter around our
>-> borders to stop such things.  Now if someone can just get the algorithm
>-> to distinguish heroin, aliens, and plutonium...

  I don't know about you, but I would be much more afraid of that algorithm
than I would be of a Soviet nuclear attack.

BfB

------------------------------

Date: Fri, 6 Jun 86 16:27:59 edt
From: mikemcl@nrl-csr (Mike McLaughlin)
To: [email protected]
Subject: Sgt. York's Latrine, and other stories

The latrine fan story keeps going around and around.  The radar never saw a
latrine, much less one with a fan.  The Doppler return of a hypothetical
fan on a hypothetical latrine would differ significantly from that of a
helicopter's rotor blades.  The story is full of the same stuff as the
latrine.  Let's not
fall into it again.
                    [Thanks, Mike.  You've got a lot of fans as we go
                     around in circles.  "Curses, Air-foiled again?"]
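
To put rough numbers behind Mike's Doppler point, here is a minimal
back-of-the-envelope sketch in C.  The fan and rotor sizes, rotation rates,
and the X-band wavelength are all illustrative guesses, not DIVAD data; the
point is only that the blade-tip Doppler shifts differ by roughly an order
of magnitude.

    #include <stdio.h>

    #define PI 3.14159265358979

    /* Blade-tip speed (m/s) for a rotor of the given radius and RPM. */
    static double tip_speed(double radius_m, double rpm)
    {
        return radius_m * rpm * 2.0 * PI / 60.0;
    }

    /* Maximum two-way radar Doppler shift (Hz) for a reflector moving at
       radial speed v (m/s), seen at wavelength lambda (m): 2v/lambda. */
    static double doppler_hz(double v, double lambda_m)
    {
        return 2.0 * v / lambda_m;
    }

    int main(void)
    {
        const double lambda = 0.03;             /* assume X-band, ~10 GHz  */
        double fan  = tip_speed(0.2, 1500.0);   /* guessed ventilation fan */
        double heli = tip_speed(7.0,  300.0);   /* guessed main rotor      */

        printf("fan tip:   %6.1f m/s -> %8.0f Hz max shift\n",
               fan,  doppler_hz(fan, lambda));
        printf("rotor tip: %6.1f m/s -> %8.0f Hz max shift\n",
               heli, doppler_hz(heli, lambda));
        return 0;
    }

With these guesses the fan tip moves at about 31 m/s (roughly a 2 kHz
shift) while the rotor tip moves at about 220 m/s (roughly 15 kHz), which
is Mike's point: the two returns should not look alike to a Doppler radar.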

------------------------------

Date: Mon 9 Jun 86 22:18:56-PDT
From: Ken Laws <[email protected]>
Subject: Sgt York's Latrine
To: [email protected]

According to 60 Minutes (or was it 20/20?) the DIVAD did not shoot at a
latrine fan.  It was distracted by a small ventilation fan, but I'm not
sure that it ever actually targeted the thing.  The fan wasn't on a
latrine; the
analogy to a bathroom fan was created by a PR man who was trying to explain
to reporters how small it was.  The "software problem" was much easier to
fix than the PR problem.

I'm an expert-systems enthusiast precisely because such bugs do crop up in
all real-world systems.  Expert systems "technology" is a form of
institutionalized hacking -- programming by successive approximation, or
debugging as part of the design effort rather than part of the maintenance
effort.  It's related to the pancake theory ("Plan to throw the first
version away.  You will anyway."), but goes deeper: plan to throw every
version away, but use the current one if you have to.

                 [Perhaps that is the radioactive pancake theory.
                 ("They're too hot to eat, but they're fun to make.
                 If you really get hungry there's always one ready,
                 and it's probably better than starving to death.")  PGN]

Effort continues on optimal algorithms and proofs of correctness, but too
often we optimize the wrong thing or omit real-life complexities from our
proofs.  (Computers are particularly vulnerable.  How do you prove that a
gamma-ray burst during a critical routine won't change a crucial bit?)
Those who build expert systems take the opposite tack: they assume that
systems will always contain bugs, so each piece should be robust enough to
function in
spite of numerous sources of uncertainty and error.  This is similar to the
renewed NASA policy that every critical shuttle system have a backup.  I
think it's a healthy viewpoint.
                                       -- Ken Laws
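
As a concrete illustration of Ken's "assume the bits can flip" attitude,
here is a minimal sketch in C of software triple modular redundancy: a
critical value is stored three times and read back through a bitwise
majority vote, so any single flipped bit is outvoted.  The example is
purely illustrative; it is not drawn from DIVAD, the shuttle, or any
particular expert system.

    #include <stdio.h>

    /* Three redundant copies of one critical 32-bit value. */
    typedef struct { unsigned a, b, c; } tmr_u32;

    static void tmr_set(tmr_u32 *t, unsigned v)
    {
        t->a = t->b = t->c = v;
    }

    /* Bitwise majority vote: each output bit is whatever at least
       two of the three copies agree on. */
    static unsigned tmr_get(const tmr_u32 *t)
    {
        return (t->a & t->b) | (t->a & t->c) | (t->b & t->c);
    }

    int main(void)
    {
        tmr_u32 altitude;
        tmr_set(&altitude, 31000);    /* some critical value             */
        altitude.b ^= 1u << 12;       /* simulate one gamma-ray bit flip */
        printf("voted value: %u\n", tmr_get(&altitude));  /* still 31000 */
        return 0;
    }

The voter expression (a & b) | (a & c) | (b & c) tolerates corruption of
any single copy, at triple the storage cost; real fault-tolerant systems
also rewrite ("scrub") the disagreeing copy, which is omitted here.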

------------------------------

End of RISKS-FORUM Digest
************************