Subject: RISKS DIGEST 12.58
Reply-To: [email protected]

RISKS-LIST: RISKS-FORUM Digest  Tuesday 29 October 1991  Volume 12 : Issue 58

       FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

 Contents:
Would you put your rook and bishop out on knights like this? (PGN)
Re: DSA/DSS -- Digital Signatures (James B. Shearer, Ron Rivest)
FDA-HIMA Conference on Regulation of Software (Rob Horn)
UCI computing survives power outage [almost] (Doug Krause)
Re: Swedish election results were delayed (Lars-Henrik Eriksson)
Re: Licensing of Software Developers (John Gilmore)
The risks of "convenient" technology (Curtis Galloway)
Free Call-Back (Lars-Henrik Eriksson)
The flip side of the 1-900 scam (Andrew Koenig)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in
good taste, objective, coherent, concise, and nonrepetitious.  Diversity is
welcome.  CONTRIBUTIONS to [email protected], with relevant, substantive
"Subject:" line.  Others may be ignored!  Contributions will not be ACKed.
The load is too great.  REQUESTS please to [email protected].  For
vol i issue j, type "FTP CRVAX.SRI.COM<CR>login anonymous<CR>AnyNonNullPW<CR>
CD RISKS:<CR>GET RISKS-i.j<CR>" (where i=1 to 12, j always TWO digits).  Vol i
summaries in j=00; "dir risks-*.*<CR>" gives directory; "bye<CR>" logs out.
The COLON in "CD RISKS:" is essential.  "CRVAX.SRI.COM" = "128.18.10.1".
<CR>=CarriageReturn; FTPs may differ; UNIX prompts for username, password.
ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY.
Relevant contributions may appear in the RISKS section of regular issues
of ACM SIGSOFT's SOFTWARE ENGINEERING NOTES, unless you state otherwise.

----------------------------------------------------------------------

Date: Tue, 29 Oct 91 9:09:59 PST
From: "Peter G. Neumann" <[email protected]>
Subject: Would you put your rook and bishop out on knights like this?

    Computer Solves Chess Argument
  BALTIMORE (AP) [29Oct91]
  A 25-year-old graduate student solved an ancient chess puzzle by taking a
computer to places no computer has gone before.  The double feat by Lewis
Stiller, a computer scientist at Johns Hopkins University, not only settled an
old chess conundrum but also opened the door to analysis once considered too
complicated for even the fastest computers.  [...]  By performing one of the
largest computer searches ever conducted, Stiller found a king, a rook and a
bishop can defeat a king and two knights in 223 moves, ending the argument over
whether the position is a draw.  Stiller, who works in Hopkins' artificial
intelligence lab, made the search by writing a new program that tapped the
power of a massively parallel computer at the Los Alamos National Laboratory
in New Mexico.
  The computer is actually thousands of processors working side by side on
parts of a program.  Unlike most computers, the Los Alamos machine has 65,536
processors instead of one.  [...]  Stiller devised a way to avoid bogging down
the computer with communications between the processors while it worked through
his 10,000-line program.  The computer solved the chess problem in five hours
after considering 100 billion moves by retrograde analysis, working backward
from a winning position.
  The prod to push the computer came from Noam Elkies, a Harvard mathematics
professor Stiller met on a computer bulletin board.  The two were discussing
computers and chess when Elkies suggested the six-piece endgame Stiller
ultimately solved.  Elkies said the solution goes beyond the gameboard.  "This
is an idea that can be used for a much greater generality of problems than just
chess games," Elkies said in a recent interview.  "The new thing he was able to
figure out was some important ways to allow the parallel computer to work on
the problem."
  The program can solve a five-piece endgame in about a minute and a six-piece
endgame in four to six hours, said Stiller, who said his chess aptitude has
slipped since he took up computer science.
  Kenneth Thompson of Bell Laboratories was the first to use retrograde
analysis to solve chess endgames, the last portion of the game, proving a king
and queen can defeat a king and two bishops.  Thompson's program took weeks to
solve a five-piece endgame using a much slower computer, Stiller said.
  The Thompson analysis led the International Chess Federation to change its
rules on what constitutes a draw.  Before that, the federation said a draw was
any game that couldn't be won in 50 moves after the last capture of a piece or
move of a pawn.  The federation now makes exceptions, Stiller said.

[There was a final comment from Stiller in the slightly longer version that I
saw in the San Francisco Chronicle today, p.A7: ``The actual significance of
this for full chess is minimal because the position is very rare.  For the
practicing chess player, I don't think it is going to have much effect.''

  We already have parameterized openings, spanning many moves in sequence.
  Perhaps now we can get to macroized game endings; in the craze to package
  everything for TV, K,R,B against K,Kn,Kn can simply invoke Stiller and cut
  to the commercial.  Of course, the 223 moves have to be impeccable, or else
  K,R,B might fall into a repeated-move draw play and K,Kn,Kn would ask for
  his quarterback (or halfback, or whatever).

     This is a wonderful example of the rapidly moving boundary of the
     computationally possible, perhaps leading nicely into the following
     on-going discussion on cryptographic complexity...  PGN]
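
[For readers who want the flavor of retrograde analysis, here is a minimal
sketch in Python of the generic backward-induction idea.  This is emphatically
NOT Stiller's massively parallel program; the game graph, position names, and
helper functions below are invented purely for illustration.

    def retrograde_analysis(positions, successors, attacker_to_move, is_mate):
        """Label each position with its distance-to-win for the attacker.

        Positions never labeled are drawn (or won by the defender).
        """
        # Level 0: the terminal won positions (e.g., checkmates).
        depth = {p: 0 for p in positions if is_mate(p)}
        d = 0
        while True:
            frontier = {}
            for p in positions:
                if p in depth:
                    continue
                ds = [depth.get(q) for q in successors(p)]
                if attacker_to_move(p):
                    # The attacker needs only ONE move into a won position.
                    if any(x is not None for x in ds):
                        frontier[p] = d + 1
                elif ds and all(x is not None for x in ds):
                    # The defender is lost only if EVERY move loses.
                    frontier[p] = d + 1
            if not frontier:
                return depth
            depth.update(frontier)   # advance one level; labels never change
            d += 1

    # Toy usage on a hypothetical four-position game graph:
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(retrograde_analysis(
        positions=graph,
        successors=lambda p: graph[p],
        attacker_to_move=lambda p: p == "A",
        is_mate=lambda p: p == "D"))   # {'D': 0, 'B': 1, 'C': 1, 'A': 2}

A real implementation sweeps predecessor lists rather than rescanning every
position each pass; Stiller's contribution was making this kind of fixed-point
computation run across 65,536 processors without drowning in interprocessor
communication.]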

------------------------------

Date: Mon, 28 Oct 91 21:16:02 EST
From: [email protected]
Subject: Re: DSA/DSS -- Digital Signatures (Rivest, RISKS-12.57)

        The letter by Rivest posted in RISKS-12.57 contains at least one
blatant error.  Ron Rivest writes: "A convenient unit of computation is a
``MIPS-year''---the amount of computation performed by a one-MIPS processor
running for a year.  A MIPS-year thus corresponds to about 32 trillion basic
operations.  If we assume that a 100-MIPS processor lasts for about 10 years,
we obtain an amortized cost estimate for today of $100 per MIPS-year of
computation."
        The figure of $100 was apparently derived by dividing $1000, the
assumed per-processor cost, by 10, the assumed lifetime in years.  However, the
processors were assumed to be 100-MIPS machines, so the correct figure would be
$1 per MIPS-year of computation.  Moreover, if computation is in fact getting
cheaper by 40% per year, computing equipment will lose 40% of its value per
year for this reason alone (ignoring any physical deterioration).  The
straight-line 10-year depreciation assumption is therefore inappropriate; one
should instead write off at least 40% in the first year, which would give a
cost of $4 per MIPS-year.  Many of the assumptions made throughout the
computation are highly debatable as well.
                                               James B. Shearer
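
[To make the arithmetic concrete, here it is worked through in Python.  The
$1000 processor cost, 100-MIPS rating, 10-year lifetime, and 40%-per-year
depreciation are the assumptions stated above, not independent figures:

    processor_cost = 1000.0    # dollars per processor (assumed)
    mips = 100.0               # processor speed in MIPS (assumed)
    lifetime = 10.0            # years (straight-line assumption)

    # Rivest's straight-line figure, correctly divided by the MIPS rating:
    print(processor_cost / lifetime / mips)   # 1.0 dollar per MIPS-year

    # Shearer's alternative: write off at least 40% in the first year, so
    # one year of use costs 40% of the machine, spread over 100 MIPS-years:
    print(0.40 * processor_cost / mips)       # 4.0 dollars per MIPS-year
]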

------------------------------

Date: Tue, 29 Oct 91 12:57:04 EST
From: [email protected] (Ron Rivest)
Subject: DSA/DSS -- Digital Signatures

How embarrassing!  Shearer is correct in pointing out my oversight.  I did
forget to divide by 100 in estimating the cost per MIPS-year.  With my
approach, the cost should have been $1 per MIPS-year.  But I like his more
refined suggestion on non-linear depreciation, and find his estimate of $4 per
MIPS-year to be reasonable.

Correcting this error, of course, only strengthens my argument and conclusions.
The cost to an attacker will be one twenty-fifth of my estimate.  We now have
as revised conclusions:

       -- An attacker in the year 2017 with $25 million to spend should
          be able to mount an attack of 31.25 billion MIPS-years, not
          1.25 billion MIPS-years.

       -- The security of the proposed DSS, with its 512-bit keys, is
          over 12,500 times too weak, not just over 500 times too weak.

       -- DSS should be attackable today for less than $25 million,
          since buying 2.1 million MIPS-years will cost only $8.2 million.

       -- The required key size (setting L(p) equal to 31.25 billion
          MIPS-years) rises from 640 bits to 710 bits.
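
       [A quick check of the scaling behind these revised numbers, in Python;
       the inputs are the dollar figures from the two letters above:

           old = 100.0                # $/MIPS-year, the erroneous estimate
           new = 4.0                  # $/MIPS-year, Shearer's corrected figure
           factor = old / new
           print(factor)              # 25.0 -- attacks become 25x cheaper
           print(1.25e9 * factor)     # 3.125e10, i.e., 31.25 billion MIPS-years
           print(500 * factor)        # 12500 -- the revised weakness factor
       ]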

The importance of having larger keys should be even more apparent.  While, as
Shearer suggests, there are still debatable points about this analysis, I do
not believe that the overall conclusions would change under a more refined
analysis.

My apologies for the error and resulting confusion.

       Sincerely,                      Ronald L. Rivest

------------------------------

Date: Fri, 25 Oct 1991 16:38 EST
From: HORN%[email protected]
Subject: FDA-HIMA Conference on Regulation of Software

On 9 and 10 October 1991, the Health Industry Manufacturers Association (HIMA)
and the Food and Drug Administration (FDA) held a joint conference to explain FDA
regulation of software.  The following is a summary of highlights from that
conference.  (If you are actually involved with potentially regulated software,
contact the FDA for the complete rules and contact an expert.  This area is as
complex in its details as the tax laws.)

First, what does the FDA regulate?
 1) Under the 1938 Food, Drug, and Cosmetic Act, any medical device, drug,
    or practice.
 2) Under the 1990 Safe Medical Devices Act, authority to examine
    devices was expanded.

Software may be involved in any of four ways:
 1) It may be a device
 2) It may be used in the manufacture of a device or drug
 3) It may be used in record keeping
 4) It may be contracted or purchased from a third party for one of the above.

FDA approval involves two steps: approval to market and approval to sell.
Approval to market involves one of two things: 1) a PMA (premarket approval)
for new medical technologies (see an expert now), or 2) a 510(k) for
equivalent medical technologies (substitutes for some previously approved
device).

For a 510(k) approval there are three categories of approval difficulty based
upon the hazard to patients and others:
1) minor: little risk of injury, either direct or indirect
2) moderate
3) major: risk of death

An example of a minor-risk device is a urological machine composed of a
funnel, flask, scale, and computer for measuring urinary function.  It is very
hard to hurt anyone when this machine malfunctions.  A misdiagnosis injury is
also very unlikely because many other measurements and human interventions will
take place before a decision is made.  An example of a major-risk device is the
remote programmer for a pacemaker.  Death is a likely direct result of a
malfunction.

The FDA examination for a 510(k) is proportionate to the risk.  For a minor
risk item, the FDA will probably accept a detailed development plan and
defensible development, configuration control, and validation methodologies.
For a major risk item, they will examine all the validation results in detail
and demand thorough hazard analysis.  They will challenge many details to
assure themselves by spot inspection that the validation is probably complete.

For more details, ask the FDA for a copy of the 510(k) reviewer's guidance.
This is the document used by the 510(k) reviewer and is freely available to
the public.

Then comes approval to sell.  This is based upon a Good Manufacturing Practices
(GMP) inspection.  Again, the inspection detail will be a function of the risk
to the patient and others.

For a minor risk item, they might not inspect at all.  Most likely, they just
verify by spot checks that the claims made in the 510(k) are being kept.  For a
major risk item, they may inspect a lot.  If someone actually gets hurt, expect
an army of inspectors swarming over everything.

For software, it was no surprise that the inspectors verify all the claims in
the 510(k).  The surprise was in how ancillary manufacturing software and
purchased software are treated.  First, any software might be inspected: if its
failure could lead to injury, it is subject to inspection.  This means that a
spreadsheet program on a PC will be subject to inspection if it is used to
compute a quality parameter.  Second, there is no assumption of validity for
off-the-shelf software.

For more details, the FDA provides copies of GMP practices regulations to
anyone who asks.

In a recent GMP inspection a drug maker was hit with violation notices because
an off-line PC was being used to run a statistical process control package as
part of a process improvement effort.  The SPC was not directly used to control
manufacture or determine quality.  Other equipment handled that.  The problems
listed were:
1) The PC was not under a strict hardware maintenance schedule with
   change control and serial-number tracking of components.
2) The specific PC hardware configuration was not validated.
3) The SPC program validation was inadequate (even though the drug
   manufacturer had run and documented test cases before placing it in use).
4) The PC was not regularly backed up.
5) There were no documented procedures for disk space management.
6) There were no documented procedures and records for software
   change and update validation.
7) There was insufficient security and auditing to assure that
   the software was not changed during use.
The manufacturer was told to fix these problems.  If they were not fixed, the
factory would eventually be shut down.

This attention to software is new at the FDA.  It went into effect this summer
and more regulations take effect this fall.

The other area that is catching people by surprise is the extent of the
definition of device and manufacture.  Most recently, the makers of blood bank
software were hit.  They had not previously realized that the database software
for tracking blood donations was a medical device and probably a class 3
device.  Big time mistake.  About a third of the blood bank software vendors
have been closed and their software recalled by the FDA.  There is an open
issue around hospital information systems (HIS) and laboratory information
systems (LIS).  These may also be medical devices, depending upon how they
are used.

As an example: a mainframe manufacturer M ran an advertisement claiming that
since hospital X used M's machines, it could deliver superior care.  By doing
this, manufacturer M has made a medical efficacy claim and converted their
mainframe into a medical device.  In theory, they must now obtain a 510(k),
undergo GMP inspection, prove the safety of their mainframe, and demonstrate
that it does in fact improve medical care.  In practice, they get a phone call
telling them
``Don't be fools.  Stop running that ad.  You don't realize what you are
doing.''

The HIS and LIS vendors are at greater risk.  If a failure in HIS or LIS
software leads to incorrect recording of critical patient information that can
then cause death, the systems may be class 3.  It depends upon what other
safeguards exist: if the usage label does not require that other safeguards
exist, class 3 may follow.

The FDA approach differs from that of the U.K. MoD and others in that there is
no FDA-approved methodology.  The FDA will not state that anything is
guaranteed acceptable.  Instead, you are always subject to challenge.  They
claim that this allows them to accept new methodologies as they are proven.  It
also lets them reject anything without exposing themselves to the risk of
making a decision.  If anything goes wrong, it's your fault and you (not the
FDA) are liable.

Rob Horn     horn%[email protected]

------------------------------

Date: Sat, 26 Oct 91 02:06:09 -0700
From: Doug Krause <[email protected]>
Subject: UCI computing survives power outage [almost]

Re: Power outage downs New York Stock Exchange (RISKS-12.55)
> The NYSE was down between 10:21am and 10:45am on Tuesday 22Oct91 because of a
> power outage that downed all of the computers (but not the lights!)

Yesterday (the 25th) we (UCI Academic Computing) had a power failure that took
out our lights and air conditioning but left the computers up.  However, within
15 minutes the temperature in the machine room had shot up 10 degrees and we
needed to bring all the systems down.  So much fun.

Douglas Krause, University of California, Irvine    BITNET: [email protected]

------------------------------

Date: Mon, 28 Oct 91 10:20:47 +0100
From: Lars-Henrik Eriksson  <[email protected]>
Subject: Re: Swedish election results were delayed (Minow, RISKS-12.56)

Martin Minow <[email protected]> writes about a computer miscalculation
during the recent Swedish elections.

What is even more frightening than the miscalculation itself is how blindly
people trust computer calculations.  As usual, during the evening after the
elections, Swedish TV was continuously presenting forecasts.

At one point, the results from one voting district from the city of Nacka,
outside Stockholm, were shown. The distribution of votes between parties was
completely weird, and the commentators went into great detail explaining how
individual districts could have a distribution of votes that differed
substantially from the national average. They also wondered what particular
factors could have caused the voters of this district to vote the way they did.

At no point did they notice that the proportion of votes given to the
different parties added up to about 140%.
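
[The missing check is nearly a one-liner.  The percentages below are
hypothetical, invented only to illustrate the test the broadcasters never ran:

    shares = [38.2, 31.9, 24.7, 21.5, 12.4, 11.3]   # invented party shares, %
    total = sum(shares)                             # 140.0 for these figures
    if abs(total - 100.0) > 1.0:                    # allow rounding slack
        print("REJECT: shares sum to %.1f%%, not 100%%" % total)
]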

Lars-Henrik Eriksson, Swedish Institute of Computer Science, Box 1263
S-164 28  KISTA, SWEDEN   +46 8 752 15 09

------------------------------

Date: Sun, 27 Oct 91 00:05:24 PDT
From: [email protected] (John Gilmore)
Subject: Re: Licensing of Software Developers

David Parnas steps beyond advocacy to misrepresentation (in RISKS-12.54):

> They assume that the body that issues the licenses is the government.
> That is not the case for other engineers.  In many jurisdictions there
> is a professional body that is charged with this task. In Ontario it is
> the APEO, Association of Professional Engineers of Ontario.  In
> Australia there is an "Institution of Engineers".  Thus, it becomes the
> job of professionals to set the standards for their own profession and
> to enforce them.

He was doing fine until he came to `...and to enforce them'.

We already have plenty of organizations in the computer field that issue
licenses to people after testing their competence.  Universities that
issue degrees are a good example.  The Certified Data Processor exam is
another.  Professionals setting the standards for their own profession,
just like he said.

What's different is that nobody is forced to get one.  The licenses
Mr. Parnas mentions are in fact *issued* by private boards, but
*enforced* by the government.  A law states that to practice that
profession, you must get a license from the board; failure to do so
results in civil or criminal penalties.  The boards are `private' in
the sense that the government does not contribute funds to them, but
they hold the government's monopoly-creating power.  He even mentions
that this varies by `jurisdiction' (government control boundary).

We had to defeat such a proposed law in the New Jersey jurisdiction
this year.  It turned out to be easy, since the legislator who
introduced the bill knew nothing about the industry, and was willing to
be corrected by feedback from the people actually affected.

Each of us has opinions, and everyone holds at least one opinion that differs
significantly from the common opinion on that topic.  Though Mr. Parnas and the
gentleman from Praxis differ from the mainstream on this issue, they don't
deserve to be called `crackpots'.

John Gilmore, Licensed Libertarian, Free Software and Crypto-Privacy Crackpot

------------------------------

Date: Mon, 28 Oct 1991 21:53:45 PST
From: Curtis Galloway <[email protected]>
Subject: The risks of "convenient" technology

Mark Bartelt <[email protected]> writes about his experience with
an ATM:

>I'm just lucky that I was only trying to make a deposit.  What if it had been
>an emergency situation, where I needed cash quickly?  If an ATM is down, I can
>generally find another one that works.  But if this were my only account, and
>all ATM access is denied, I'm out of luck.

Back in the "good old days" before ATMs, you were always out of luck outside
normal banking hours!  But this brings up an important point: when is it OK to
give in to "convenient" technology?

For example, many people where I work keep their telephone number list on their
computers.  When the computer is down, they can't look at their phone list.
(Of course, the thoughtful people print out their list occasionally.)

Relying on convenient technology is very tempting, and leads to a certain
amount of risk.  It's convenient to balance your checkbook with your home
computer because you don't have to do it by hand.  But if you don't do it by
hand, then what do you do when your hard disk crashes?  I'm oversimplifying, of
course, but you get the point.

I long ago learned not to depend on ATMs always being able to give me money,
but I do admit to still relying on some RISKy technology for the sake of
convenience, even though it sometimes fails me.

Curtis Galloway, The Santa Cruz Operation, Inc.   uunet!sco!curtisg

------------------------------

Date: Mon, 28 Oct 91 10:07:26 +0100
From: Lars-Henrik Eriksson  <[email protected]>
Subject: Free Call-Back

There are many risks involved with new computerised services on telephone
networks.  Recently a poster pointed out how "900" numbers can be used for
fraud.  I had the following experience using a Swedish phone booth earlier
this year:

I was going to make a long-distance call from a public phone booth.  The
number I called was busy for a long period of time.  Fed up with trying to call
again and again, I thought that if the payphone was connected to a computerised
switch, it might have automatic callback from busy numbers.  On getting the
busy signal, I dialed the code for automatic callback and waited.

After about five minutes the phone rang; I answered and was connected to the
number I had dialed.  The twist is that on my unsuccessful attempts to call,
all coins were returned, since I received a busy signal. When the automatic
callback took place, the payphone didn't require any coins, since *it* was
being called! The effect was that I could make my call without having to pay
anything.
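
[A toy model of the two billing rules involved, in Python.  The data layout
and rule details are hypothetical, made up to show how two individually
sensible rules compose into a free call:

    from collections import namedtuple

    Call = namedtuple("Call", "incoming result tariff")

    def coins_required(call):
        if call.incoming:             # Rule 1: the *caller* pays for incoming
            return 0                  # calls, never the payphone.
        if call.result == "busy":     # Rule 2: coins are refunded when the
            return 0                  # dialed number is busy.
        return call.tariff

    # The automatic-callback sequence, as seen by the payphone:
    print(coins_required(Call(False, "busy", 10)))       # 0: coins refunded
    print(coins_required(Call(True, "completed", 10)))   # 0: incoming call

Each rule is locally reasonable; it is the callback feature that converts the
failed outgoing call into a completed *incoming* one, so no coins are ever
collected.]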

I have heard rumours that Swedish Telecom has now disabled this service on
payphones.

Lars-Henrik Eriksson, Swedish Institute of Computer Science, Box 1263
S-164 28  KISTA, SWEDEN   +46 8 752 15 09

------------------------------

Date: Sat, 26 Oct 91 09:01:43 EDT
From: [email protected]
Subject: The flip side of the 1-900 scam

The building where I work deals with the 900 problem by prohibiting
900 calls from all phones in the building, period.

This fact was discovered by one of my colleagues when a commercial software
package he was using in his work didn't behave the way he expected.  It turns
out that the vendor provides technical support via a 900 number, so he couldn't
call them.

He couldn't call them with his telephone credit card, either -- when they say
no calls to 900 numbers, they mean it!  He finally had to go home and call from
there.
                                       --Andrew Koenig

------------------------------

End of RISKS-FORUM Digest 12.58
************************