Date: Fri 13 Sep 85 00:47:58-PDT
From: RISKS FORUM    (Peter G. Neumann, Coordinator) <[email protected]>
Subject: RISKS-1.11
Sender: [email protected]
To: [email protected]

RISKS-FORUM Digest       Friday, 13 Sep 1985      Volume 1 : Issue 11

       FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS
                Peter G. Neumann, moderator

Contents:
 SDI and John McCarthy (Charlie Crummer)
 SDI and Safeguard (John Mashey)
 SDI and Robert Jastrow (Herb Lin)
 Some financial disaster cases from Software Engineering Notes
         (three contributions, totalling five reports)

(Contributions to [email protected], Requests to [email protected])
(FTP Vol 1 : Issue n from SRI-CSL:<RISKS>RISKS-1.n)

----------------------------------------------------------------------

Date: Thu, 12 Sep 85 19:00:29 PDT
From: Charlie Crummer <[email protected]>
To:   risks@sri-csl
Subject: SDI and John McCarthy

>Date: 12 Sep 85  0057 PDT
>From: John McCarthy <[email protected]>
>Subject: SDI
>To:   [email protected]
>
>... there [is] no principle of computer
>science that says that programs of any particular task cannot be written and
>debugged.

 Computer Science does not contain or deal with the principles operative
 in the writing and debugging of large, nay, HUGE, eclectic software
 programs.  That is the realm of Software Engineering.  By the same token,
 there is no principle of the theory of random processes that says that
 the works of Shakespeare cannot be written by 1,000,000 monkeys pounding
 1,000,000 typewriters, either; in fact, in principle that would be one
 way of reproducing those works.  No serious student of Shakespeare who
 knew something about random processes would propose such an undertaking,
 of course.  A mathematician who knew nothing about typewriters and little
 about Shakespeare, however, might, if Ronald Reagan persuaded him that
 the problem should be worked by assigning 1,000,000,000,000 monkeys to
 1,000,000,000,000 typewriters.  In software engineering, as in mechanical
 engineering, there is the concept of feasibility to be considered.
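
 For concreteness, a back-of-the-envelope sketch in Python (the alphabet
 size, typing rate, and monkey census are assumptions for illustration):

   ALPHABET = 27                        # 26 letters plus space (assumed)
   line = "to be or not to be"          # one short line, 18 characters
   p_hit = (1 / ALPHABET) ** len(line)  # chance one random attempt matches

   monkeys = 10**12                     # the trillion-monkey proposal
   attempts_per_year = monkeys * 10 * 3600 * 24 * 365   # 10 keystrokes/sec

   expected_years = 1 / (p_hit * attempts_per_year)
   print(f"expected wait for one line: {expected_years:.0e} years")
   # For the complete works (millions of characters) the exponent in
   # p_hit makes the wait dwarf the age of the universe: possible "in
   # principle", infeasible in engineering terms.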



>       Now I shall say my opinion about SDI.
>
>If it can be done, it should.

 If you had a gun wouldn't you be more afraid to face a gunman with
 a bullet-proof vest than one without?  If he began deliberately to
 put this vest on as he stood before you with his gun leveled at you
 wouldn't you be inclined to fire before he got the vest on?

>If it affords complete protection that's great,
>and if it affords partial protection, that's good.

 You speak in the present tense but "it" does not exist!  How can
 a non-existence afford anything?  At least one of the basic questions
 is whether it can be made at all.

>The balance of terror is a bad thing.

 Yes, and SDI would only enhance the terror.  The "civilized" world has
 no defensive answer to terrorists in such mundane places as airliners,
 let alone in space, and none is in the offing.

>Here are answers to some counter
>arguments to its desirability.  (a) Joe Weizenbaum says that it attempts a
>technological solution to a problem that should be solved morally.

 MUST be solved between the terrorizer and terrorizee.  When someone's
 out to get you there's no place to hide.  (D. Corleone)

>Alas,
>moral progress has been so slow that almost the only moral problems to be
>even partially solved are those that can at least partially be turned into
>technological problems.

 Not true, viz. cannibalism and slavery.

>For example, the technology of contraception has
>greatly reduced human unhappiness.

 What evidence do you have of that?

>(b) It is argued that the Soviets would
>have to attack at the first sign of deployment.  Every past imminent advance
>by either side has in principle given the other side some temptation to
>strike before it can be deployed.  So far as we know, neither side has even
>come close to giving in to such temptation.  One reason is that the effect
>of any advance is always subject to a probabilistic estimate, so temporizing
>has always looked better than attacking.  Even if SDI works very well, it
>may be that no-one will be able to be sure that it is that good.

 You may be safe in saying that but I hope our leaders are not so cavalier.
 Most serious strategy is based on "worst case" scenarios.

>       However, most likely the main reason has been that neither side
>ascribes the very worst intentions to the other with certainty.  Each side
>has always said, "Perhaps they don't actually mean to attack us.  Why have a
>nuclear war for sure instead of only a certain probability?"  Anyway the
>Soviets have experienced a period in which we had complete nuclear
>superiority and didn't attack them.
>
>2. My opinion is that if the physics of the problem permits a good
>anti-missile defense the programs can be written and verified.  However, it
>will be quite difficult and will require dedicated work.  It won't be done
>by people who are against the whole project.  Computer checked proofs of
>program correctness will probably play some role.  So will anticipating what
>kind of bugs would be most serious and putting the biggest effort into
>avoiding them.  Having many people go over and discuss all the critical
>parts of the program will also be important.
>

 Whether the physics of the problem admits a good anti-missile defense
 is a paramount question.  It will take much more than dedicated climbing
 of the automatic proof-of-correctness "tree" to get to the "moon" of
 an "astrodome" over the U.S. a la Reagan's definition of strategic
 defense.


 --Charlie

------------------------------

Date: Thu, 12 Sep 85 22:56:02 pdt
From: mips!mash@glacier (John Mashey)
To: [email protected]
Subject: SDI and Safeguard

I used to work with many of the people at Bell Labs who worked on the
Safeguard ABM; they were competent people who knew how to build complex
systems.  Maybe there were some who believed that it was actually possible
to build a reliable, deployable, maintainable ABM that one could expect to
work in real use; if so, I never met any; most folks did not so believe,
and said so. [They did believe that you could shoot down missiles in
well-controlled tests, because they'd done it; they just didn't believe
it would work when it needed to.]

------------------------------

Date: Thu, 12 Sep 85 20:08:22 EDT
From: Herb Lin <[email protected]>
Subject:  SDI and Robert Jastrow
To: [email protected]
cc: [email protected], [email protected]

   From: John McCarthy <JMC at SU-AI.ARPA>

       At the suggestion of Robert Jastrow, who is one of the main
   scientific defenders of SDI, I made the same point in letters to three
   Congressmen, said to be influential in the matter of SDI appropriations.

Robert Jastrow is certainly a defender of the SDI, but he has admitted
publicly in his own Congressional testimony that he does NOT carry
out scientific analyses of anything related to SDI.  He hardly counts
as a "scientific defender."

------------------------------

Date: Fri 13 Sep 85 00:22:19-PDT
From: Peter G. Neumann <[email protected]>
Subject: Some financial disaster cases from Software Engineering Notes
To: [email protected]

I hope that the RISKS Forum will not degenerate into only an SDI Forum, so I
thought I would counterbalance this issue with a new topic.  I have
resurrected a contribution from the July 1985 SIGSOFT SEN, and I also preview
some newer cases that will appear in the October 1985 SEN (which is just
about ready to go to press).  (Those of you who are ACM SIGSOFT members,
please pardon the duplication.)

                        -------

    [FROM ACM Software Engineering Notes vol 10 no 3, July 1985]

Disasters Anonymous 1: A Rose is Arose is (Three) Z-Rose

Now and then I get a story that I cannot print.  (I do have a few, but don't
ask.  I have of course conveniently forgotten them all.)  Here is one that can
be printed -- although its author must remain anonymous.  Note that the case of
the three extra zeroes resulting from two different assumptions about the human
interface bears an eerie resemblance in cause to the case of the shuttle laser
experiment, which follows it.  [PGN]

 A group within my company had a policy of dealing only in multiples
 of one thousand dollars, so they left off the last three digits in
 correspondence to the wire transfer area to make their job easier.
 Other groups, however, had to write out the full amount since they did
 not always deal with such nice round numbers.  One day, a transaction
 was processed that had a value of $500,000.  The person who entered the
 transaction thought that it was from the group who dealt in multiples
 of $1000 and entered it as $500,000,000.  Of course, this was not the case,
 so a $500,000 transaction became a $500,000,000 one.

 The only thing that prevented a disaster was that the wire was sent to a
 small company that called back to verify the amount, and the error was then
 caught.  This was a Federal Reserve transaction and the funds had already
 been transferred, but the timing was good and the transaction was backed
 out in time.  My opinion is that such critical software should have caught
 the error before the wire was sent to the Federal Reserve.
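
 A minimal sketch in Python of the kind of cross-check such software could
 make; the explicit-units rule is an illustration, not the bank's actual
 design:

   # Illustrative only: the error arose because the units ($1 vs. $1000)
   # were implicit in which group originated the correspondence.  Making
   # the units explicit leaves nothing for the entry clerk to guess.

   def parse_amount(raw: int, units: str) -> int:
       """Return the amount in dollars; 'units' must be stated."""
       if units == "dollars":
           return raw
       if units == "thousands":
           return raw * 1000
       raise ValueError(f"unknown units: {units!r}")

   # The two readings of the same $500,000 correspondence:
   assert parse_amount(500_000, "dollars")   == 500_000
   assert parse_amount(500_000, "thousands") == 500_000_000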

 Another error in a Federal Reserve transfer had to do with multiple
 transactions per communications transfer.  In this case, the Federal
 Reserve software put a pair of nulls in the data that should have been
 translated as blanks.  However, they were stripped out and a $200,000,000
 incoming wire was lost.  To maintain the Fed balance, money was purchased
 to cover a deficit that didn't exist -- since the money was a credit.
 This was a substantial monetary loss because of inadequately tested
 software.
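
 A sketch in Python of the difference between translating and stripping;
 the record layout is invented for illustration:

   # Reconstruction of the failure mode: translating NULs to blanks
   # preserves field positions; stripping them shifts every later field,
   # so fixed-column parsing goes wrong.

   record = b"INWIRE\x00\x00200000000"          # amount field at column 8

   translated = record.replace(b"\x00", b" ")   # b"INWIRE  200000000"
   stripped   = record.replace(b"\x00", b"")    # b"INWIRE200000000"

   assert translated[8:17] == b"200000000"      # amount read correctly
   assert stripped[8:17]   != b"200000000"      # amount mangled; wire "lost"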

                        -------

    [FROM ACM Software Engineering Notes vol 10 no 5, October 1985]

Disasters Anonymous 2: Financial Losses

Our anonymous contributor from SEN vol 10 no 3 (July 1985) has come through again.

 Since I sent some disaster reports to you in May, another one has occurred.
 This one caused some financial loss and acute headaches among managers.

 Most large banks subscribe to the Federal Reserve's funds transfer system,
 frequently referred to as "Fedwire".  Our system that connects to Fedwire
 was being upgraded with a new DDA (demand deposit accounting) interface to
 the host to help protect against overdrafts.  During a review, it was
 determined that the software was not quite ready but should be okay to put
 into production two days later.  I cautioned them against doing so, since
 not all of the bugs had been resolved and the software had not been "stress
 tested" (or whatever phrase you wish to use for testing that ensures that
 it will work in production).

 The first day of production went fine.  However, the main file in the new
 software was an ISAM file that had degraded significantly during the first
 day.  On the second day, that file continued to fragment and started to
 consume a large amount of the system resources.  This slowed response time
 so much that by the end of the banking day, we still had hundreds of wires
 to send to the Federal Reserve.  We had to request extensions every half
 hour for hours to try to squeeze the transactions through the system so
 that the money would get to our customers.

 In addition, the response-time problem and other bugs in the software
 prevented us from knowing our Federal Reserve balance.  Since we must
 maintain some 150 million dollars in our Fed "checking account", this lack
 of information could have caused significant financial loss: 1.5 billion
 dollars were posted that day, and at first we were off by hundreds of
 millions of dollars.

 Another part of this disaster is that the slow response time caused one
 program to assume that the host was down.  When a transaction finally went
 through, our system would transmit the DDA information, but the host did not
 acknowledge that it already had the wire.  Thus a large number of wires
 were being "double posted" (money sent twice).  At the end of the day, tens
 of millions had been double posted.
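
 The standard defense against this pattern is to make posting idempotent;
 a sketch in Python, with an invented wire format and ledger:

   # Idempotent posting (sketch): each wire carries a unique ID, and the
   # receiver records the IDs it has applied.  A retransmission after a
   # timeout is acknowledged again but posted only once.

   posted_ids = set()
   ledger = {}

   def post_wire(wire_id, account, amount):
       if wire_id in posted_ids:
           return "ack (duplicate, not re-posted)"
       posted_ids.add(wire_id)
       ledger[account] = ledger.get(account, 0) + amount
       return "ack (posted)"

   post_wire("W-1001", "cust-42", 1_000_000)  # slow reply; sender times out
   post_wire("W-1001", "cust-42", 1_000_000)  # retransmission, not doubled
   assert ledger["cust-42"] == 1_000_000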

 As of this writing, the Fed balance has been straightened out, but not all
 of the double postings have been recovered.  Note that at current interest
 rates, a bank loses $350 per day per million dollars of unused money.
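
 That figure is easy to check (assuming a 365-day year):

   implied_rate = 350 * 365 / 1_000_000   # $350/day per $1,000,000
   print(f"implied annual rate: {implied_rate:.3%}")   # 12.775%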

                        -------

    [FROM ACM Software Engineering Notes vol 10 no 5, October 1985]

Disasters Anonymous 3: Insurance, Reinsurance, and Rereinsurance

Perhaps anonymity is contagious.  Re: reinsurance, here is
another letter from a different contributor.

 I have recently started receiving SEN and find the ``war stories'' quite
 interesting.  Here are three more.  I would prefer anonymity should you
 choose to print these.

 The first is hearsay (from a former co-worker).  Apparently he and his
 wife had a joint account with a $300 balance.  They needed $200 in cash, but
 due to miscommunication they both made $200 withdrawals - she at a teller's
 window (cage?) and he at an ATM (automatic teller machine) - within minutes
 of each other.  When the dust settled they found that their account had a
 zero balance:  the first $200 withdrawal left a $100 balance, the second
 should have left a negative balance of $100, but the computer generated a
 $100 credit to offset the shortfall.  The icing on the cake was my friend's
 inability to explain the situation to the bank, convince them of it, and
 get them to accept restitution.
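
 What the story describes is an unserialized check-then-act on a shared
 balance; a sketch in Python of the race and the per-account serialization
 that prevents it (the bank's real design is unknown; the account machinery
 here is invented):

   import threading

   # Two $200 withdrawals racing on a $300 balance.  Without serialization
   # each channel can read $300, decide the withdrawal is safe, and write,
   # losing one update (or, as in the story, tripping compensating logic
   # that manufactures a $100 credit).

   balance = 300
   lock = threading.Lock()

   def withdraw(amount):
       global balance
       with lock:                   # remove this and the race reappears
           if balance >= amount:
               balance -= amount
               return True
           return False             # overdraft refused, not papered over

   teller = threading.Thread(target=withdraw, args=(200,))
   atm    = threading.Thread(target=withdraw, args=(200,))
   teller.start(); atm.start(); teller.join(); atm.join()
   assert balance == 100            # one succeeds, the other is refused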

 I need to be circumspect about this second story -- it might well have
 involved fraud.  While a consultant, I was hired to review a reinsurance
 agreement.  The reinsurance industry is an old-boys', ``handshake is my bond''
 industry, as insurers frequently offset their risk by selling it (reinsuring)
 to other insurers.  That is, I insure your building for $10,000,000 and
 re-sell all or part of that risk to another firm.  Apparently, late one
 Monday morning (nearly 11:00 a.m. EST), my client got notice across his
 computer network from another firm that it was reinsuring (i.e. off-loading
 risk) to my client to the tune of several million dollars.  The message was
 time-dated Friday evening (6:00 P.M., WST).  As ``luck'' would have it, the
 property in question had suffered a catastrophic loss over the weekend.  The
 bottom line was that the message had been sent directly (not through any of
 the store-and-forward services) and the time-date was thus determined by the
 clock-calendar on the sender's computer.  Need I say more?
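
 The general fix is to distrust sender-supplied time-dates altogether and
 stamp messages on receipt; a sketch in Python, with invented message
 fields:

   from datetime import datetime, timezone

   # Receiver-side timestamping (sketch): keep the sender's claimed
   # time-date for the record, but let the binding time come from a clock
   # the sender cannot set back.

   def receive(message):
       stamped = dict(message)
       stamped["claimed_sent"] = message.get("sent")     # untrusted
       stamped["received_at"] = datetime.now(timezone.utc).isoformat()
       return stamped

   wire = {"body": "reinsuring several $M", "sent": "Fri 6:00 P.M. WST"}
   print(receive(wire)["received_at"])   # the only time that counts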

 Finally, a story told to me ``out of school'' by a friend at one of the
 nation's largest insurance companies.  They apparently are involved in so
 many reinsurance deals that it turned out that they were reinsuring
 themselves.  I.e., Jones reinsured with Smith who reinsured with Brown who
 reinsured with White who reinsured with Smith.  Smith, it turned out, was
 paying both Brown and White commissions for accepting his own risk.  The
 computer system was not designed to look beyond the current customer, so
 the loop went undetected.
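
 Looking beyond the current customer is a cycle check in the directed graph
 of reinsurance deals; a sketch in Python, using the names from the story:

   # Reinsurance deals as a directed graph: an edge means "off-loads risk
   # to".  A walk from each firm detects risk that returns to its origin.

   deals = {
       "Jones": ["Smith"],
       "Smith": ["Brown"],
       "Brown": ["White"],
       "White": ["Smith"],          # closes the loop Smith -> ... -> Smith
   }

   def reinsures_self(start, graph):
       seen, stack = set(), list(graph.get(start, []))
       while stack:
           firm = stack.pop()
           if firm == start:
               return True          # risk came back to its origin
           if firm not in seen:
               seen.add(firm)
               stack.extend(graph.get(firm, []))
       return False

   assert reinsures_self("Smith", deals)   # pays commissions on own risk
   assert not reinsures_self("Jones", deals)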

------------------------------

End of RISKS-FORUM Digest
************************
