Subject: RISKS DIGEST 12.40
REPLY-TO: [email protected]

RISKS-LIST: RISKS-FORUM Digest  Wednesday 25 September 1991  Volume 12 : Issue 40

       FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

 Contents:
Bell V-22 Osprey - correct sensor outvoted (John Wodehouse)
Challenger O-ring Problem heads topics at conference on ethics (George Leach)
People and Public Screens (Antony Upward, PGN)
Credit bureaus, heisenbugs, and clerical errors (Peter G. Capek)
Electronic locks at Harvard (David A. Holland)
Bad error handling in Lamborghini Diablo engine management (Richard Boylan)
Denver Hacker Hacks NASA (Andy Hawks)
Re: MSAFP, utilities, and all that (Eric Eldred)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in
good taste, objective, coherent, concise, and nonrepetitious.  Diversity is
welcome.  CONTRIBUTIONS to [email protected], with relevant, substantive
"Subject:" line.  Others ignored!  REQUESTS to [email protected].  For
vol i issue j, type "FTP CRVAX.SRI.COM<CR>login anonymous<CR>AnyNonNullPW<CR>
CD RISKS:<CR>GET RISKS-i.j<CR>" (where i=1 to 12, j always TWO digits).  Vol i
summaries in j=00; "dir risks-*.*<CR>" gives directory; "bye<CR>" logs out.
The COLON in "CD RISKS:" is essential.  "CRVAX.SRI.COM" = "128.18.10.1".
<CR>=CarriageReturn; FTPs may differ; UNIX prompts for username, password.
ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY.
Relevant contributions may appear in the RISKS section of regular issues
of ACM SIGSOFT's SOFTWARE ENGINEERING NOTES, unless you state otherwise.

----------------------------------------------------------------------

Date: 25 Sep 91 09:23:00 EST
From: "John Wodehouse" <[email protected]>
Subject: Bell V-22 Osprey - correct sensor outvoted

Further information about the V-22 crash from Flight International 18-24
September 1991.

   "A  Bell-Boeing V-22  Osprey tiltrotor is flying again for the first
   time since the crash of aircraft number five on its first flight in
   June.   Aircraft number three has made at least three flights, after
   extensive checks by the US Navy (USN).

   The  USN  has  also  released  a brief report on the accident, which
   reveals that similar faults have been found in two  other  aircraft.
   It  says  that  TWO  roll-rate sensors (my capitals), known as vyros,
   which  provide  signals  to  the  flight  control   computer,   were
   reverse-wired.   In the triple-redundant system the two faulty units
   "outvoted" the correct sensor, leading to divergent roll cycles  and
   a crash shortly after take-off.

   The  report  says  the  cockpit  interface  unit  is  connected by a
   120-wire plug connector in which the vyro unit uses numbers  59  and
   60  -  which  were  reversed.  Examination of aircraft one and three
   revealed that one vyro in each was also reversed.

   The number three aircraft flew for 18min on 10 September in a flight
   cut short by extremely poor visibility.  It flew again the next day,
   and was to complete a third flight on 13 September."

What worries me is that aircraft one and three were obviously flying
with one vyro reverse-wired for quite some time.  The triple-redundant
system would have outvoted this vyro, but why was no indication given
that there was a problem at all?  What confidence does that provide for
other systems which depend on voting, if such failures are not reported?
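
To make the worry concrete, here is a minimal sketch (in Python, with
made-up numbers; the actual flight-control code is of course not public)
of 2-of-3 voting among roll-rate sensors:

    def vote(readings, tolerance=0.5):
        """Return the majority value and the index of any outvoted
        sensor (None if all three agree within tolerance)."""
        a, b, c = readings
        if abs(a - b) <= tolerance and abs(b - c) <= tolerance:
            return (a + b + c) / 3.0, None
        if abs(a - b) <= tolerance:
            return (a + b) / 2.0, 2
        if abs(a - c) <= tolerance:
            return (a + c) / 2.0, 1
        if abs(b - c) <= tolerance:
            return (b + c) / 2.0, 0
        raise RuntimeError("no two sensors agree")

    # Two reverse-wired units outvote the one correct sensor:
    value, outvoted = vote([-3.0, -3.0, 3.0])   # true roll rate is 3.0
    print(value, outvoted)                      # -3.0 2: wrong sign wins

    # With only ONE unit reverse-wired the vote resolves correctly, but
    # "outvoted" is not None -- exactly the dissent that should have been
    # latched and annunciated on aircraft one and three.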

Lord John --- the programming peer

  [We have reported on similar cases in RISKS before.  For example, see
     J.E. Brunelle and D.E. Eckhardt, Jr.,
     Fault-Tolerant Software: An Experiment with the SIFT Operating System,
     Proc. Fifth AIAA Computers in Aerospace Conference, 355-360, 1985,
  where two programs written by different people to the spec of a correct
  program had a common flaw, and outvoted the correct program.  PGN]

------------------------------

Date: Tue, 24 Sep 91 10:53 EDT
From: [email protected] (George Leach)
Subject: Challenger O-ring Problem heads topics at conference on ethics

From the Tuesday, September 24, 1991, issue of the St Petersburg Times:

       "Event explores ethics in business"

       An engineer who worked on the Challenger says not "doing the right
       thing" can have dire consequences, but so can acting ethically.

       By John Craddock, Times Staff Writer

       TAMPA - When the engineer studying the O-rings on the space shuttle
Challenger suspected they might cause a catastrophe, he told his bosses.  They
listened.  Then they made what former Morton Thiokol engineer Roger Boisjoly
called "a management decision."  That decision launched a tragedy.  The O-rings
failed, and the Challenger exploded Jan. 28, 1986.
       In later statements before a presidential commission and in documents
he produced, Boisjoly showed he had tried to do the right thing.  But for him,
doing the right thing ethically meant the undoing of his professional life.  "I
stepped into quicksand....It was the total destruction of my career," he said.
       Discussions - and confessions - about ethical behavior and what it
means to professionals - are the theme of a two-day conference at the
University of Tampa.  The conference ends today.  Titled "Doing the Right
Thing: Revolutions in Professional Ethics," the conference Monday attracted a
blue-chip panel of ethical experts, as well as politicians, lawyers, and
journalists.
       Among those speaking Monday morning was Gov. Lawton Chiles.  He told
the group of about 150 that he doesn't blame the lack of ethical fiber in
recent years on "the mindless materialism of the 1980's" creating a "moral
vacuum across the land."  He said unethical behavior has always been with us.
"I'm not sure we can blame it all on the 1980's," said Chiles, who has been
involved in politics since the 1950's.
       He noted one difference: The lack of surprise when people hear that
a judge is taking bribes or other news of the public trust being betrayed.
"Our citizens are no longer shocked," he said.  That's why political and
business leaders must step out and "be willing and able to do the right
thing."  He then launched into his own campaign to build trust with the
people of Florida and cut state spending. [Note: the St Pete Times reported
last week that the state's projected revenue will fall short by some 623
million dollars - prompting cuts, including in education - gwl]
       Boisjoly, who appeared in an afternoon session, said the anguish
he felt from his experience at Morton Thiokol was two-fold.  He wondered
whether his own protests were strong enough and whether he could have
prevented the Challenger tragedy.  He also said his company came to view
him as a traitor.  The public tends to view whistle-blowers as "good guys,"
he said.  But the perception in government and corporate circles is that
"we're the bad guys.  We're the messengers with bad news."
       Other speakers included Manuel Velasquez, director of the Center
for Applied Ethics at Santa Clara University in California.  He said
business ethics are somehow presumed to be separate from the everyday
ethical decisions people make.  He said people tend to think of business
as a poker game with its own rules.  But business ethics "are not specialized,"
he said, and shouldn't be considered outside the normal bounds of fair play.

------------------------------

Date: 25 Sep 91 07:34 GMT
From: [email protected] (KPMG - Antony Upward,IVC)
Subject: People and Public Screens

I was recently returning to Paris from Birmingham (UK).  Birmingham
international airport has just opened a new terminal, including of course, the
latest in computerised information systems to keep travellers informed.

It appeared that there was no longer a direct link between the screens being
updated with new information (e.g., Flight BA5310 Boarding Gate E, or flight
BM540 delayed 30mins) and a public announcement to the same effect.  The
public announcements seemed to come about 5 minutes after the screens were
updated.

My flight's gate details were displayed - Gate E.  I, and about 100 other
passengers, went to gate E, and waited.  There were no airline staff present.

After about 5 minutes of 100+ people waiting at Gate E, the public address
system announced, quite calmly (with no indication that the screens were
displaying wrong information), that my flight was boarding at Gate D.
*NO ONE MOVED*.
No one believed the public announcement, even though there were no airline
staff at Gate E.

It was only when one of the airline staff at Gate D wondered why none of the
passengers had turned up that they came in person to investigate.  Of course we
were all waiting at Gate E.  Only then, when the announcement was made in
person, was the information on the screens disbelieved!

It seems, at least from this experience, that a majority of people now `trust'
the information on screens, even when it is directly contradicted by a human
announcement, and by circumstantial evidence that the screens are not correct.

Antony Upward, Apple Computer Europe

------------------------------

Date: Wed, 25 Sep 91 15:14:08 PDT
From: RISKS Forum <[email protected]>
Subject: Re: People and Public Screens

On my previous trip East I discovered an annoying bug in United's display
program.  My flight was not listed on the multiscreen DEPARTURES display.
After checking back several times, I discovered the problem: whichever flight
should appear on the LAST LINE on the FIRST SCREEN of a multiscreen DEPARTURES
display was getting truncated.  An example of off-by-one programming, probably.
I wonder if anyone fixed it yet?
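
For the curious, the bug is easy to reproduce.  Here is a sketch in Python
(page size and flight names invented; I make no claim about what United's
code actually looks like) of slicing a departures list one row short on the
first screen only:

    ROWS = 20                             # lines per screen (assumed)
    flights = ["flight %02d" % n for n in range(45)]

    def pages_buggy(items):
        # First page sliced to ROWS-1 rows; later pages resume at index
        # ROWS, so the item at index ROWS-1 appears on no screen at all.
        return [items[:ROWS - 1]] + [items[i:i + ROWS]
                                     for i in range(ROWS, len(items), ROWS)]

    def pages_fixed(items):
        return [items[i:i + ROWS] for i in range(0, len(items), ROWS)]

    shown = [f for page in pages_buggy(flights) for f in page]
    print(flights[ROWS - 1] in shown)     # False: one flight vanishes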

  [I thought I had reported this one previously, but I cannot find it in
  the archives, and it seems too cute and relevant not to include.  PGN]

------------------------------

Date: Tue, 24 Sep 91 00:32:15 EDT
From: "Peter G. Capek" <[email protected]>
Subject: Credit bureaus, heisenbugs, and clerical errors

I recently reported here on the effect that many inquiries by, say, car
dealers can have on an individual's credit rating when that person is
shopping around for a car.  The dealers, in order to assess the likelihood
that a person might buy a car, would request a credit report on the
individual; the effect of repeated such inquiries was to give the impression
that the person was overextending himself.  (RISKS-12.20)

The Wall Street Journal today (23Sep91) reports on credit bureaus and their
difficulties.  Specifically relating to the earlier comment is a description
given by a headhunter who would obtain, from a candidate's credit bureau
report, the names of other firms who had recently requested that report.  He
could then call the candidate and say, quite accurately, "You're applying to X
and Y and Z; why don't you also consider W?"  (I believe that the law
regulating this, the 1971 Fair Credit Reporting Act, requires inclusion in the
report of the names of all those to whom a copy was sent within the last 2
years; was this requirement intended to let the individual know who had seen
the data, or to let the requesters coordinate amongst themselves what credit
had been granted, etc?)

The most interesting item in the article, however, is the intriguing lead, in
which much of the citizenry of Norwich, Vermont, is abruptly flagged as bad
credit risks by TRW.  The problem was ultimately tracked down to an alarmingly
simple error: A person working part time (for a similar, but apparently
unrelated, company) at obtaining public records and feeding them back to the
credit bureaus had been asked to obtain the list of Norwich's delinquent
taxpayers.  She mistakenly got the list of tax receipts and carefully
reported that some 1400 residents -- in a town of 3100 -- were delinquent.  It
took nearly three weeks to clear up; half the delay was simply in getting TRW
to return repeated phone calls.  [It seems as though a reasonableness check on
the (size of the) delinquent list might have averted the problem.]
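
Such a check need not be fancy.  A sketch (Python, with a threshold I have
simply made up) of the kind of plausibility test the data-entry pipeline
could have applied:

    def plausible_delinquency(delinquent_count, population,
                              max_fraction=0.20):
        """Reject a batch that flags an implausible share of a town as
        delinquent -- almost certainly the wrong list was keyed in."""
        return delinquent_count <= max_fraction * population

    print(plausible_delinquency(1400, 3100))    # False: 45% of Norwich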

But the article goes on to shed some light on what may be the motivations of
the credit agencies:  their customers, banks and stores, are anxious to obtain
reports with the largest amount of negative data, thinking that it has the
effect of maximizing their probability of detecting a bad risk.  Since the
bureaus are paid by the organizations to whom they provide the reports, and
not by those whom the reports describe, one is led to speculate on their
motivation.

Risks?  I think the effect of the requirement to record and report the
names of those receiving the report may be seen here to have boomeranged.
Perhaps that problem wouldn't arise if those names were recorded, but only
reported to the individual.  The lesson here may simply be that we need to be
as conscientious in assessing the risks of our solutions, as we are in
evaluating the problems they address.
                                                 Peter G. Capek

------------------------------

Date: Tue, 24 Sep 91 11:50:11 EDT
From: [email protected]
Subject: Electronic locks at Harvard

From the Harvard Independent, Sept. 19, 1991, pg. 4:

                          The Key to Security

  Computerized ID cards are the wave of the future, but for residents of the
three Union dormitories - Greenough, Hurlbut, and Pennypacker - time seems to
be moving faster than in other parts of the University. The Harvard University
Police Department (HUPD) has replaced the standard entryway keys for each of
these dorms with computerized, credit-card-like key cards. According to HUPD
chief Paul Johnson, the cards prevent unauthorized persons from gaining access
to the dorms, enable the police department to track the use of each key card by
computer, and prevent people from jimmying locks. "It's state of the art," said
Johnson. Union dorm residents feel more secure with the improved locks.  Said
Pennypacker resident Missy Francis '95, "Ninety percent of the upperclassmen
have skeleton keys to the Yard, so this way no one can get into our dorms." If
all goes as planned, other dorms will be wired by the end of the year.

                                   ----

  Now, some of the risks here are obvious: tracking the usage of each key, for
example. I am sure RISKS readers are familiar with the implications of that.
Worse, the article implies that the police are actively aware of the
possibility and may be pursuing it directly. While I have nothing against the
Harvard police, I nevertheless don't see this form of surveillance as a good
thing.
  Of course, the fundamental problem is that skeleton keys to all the dorms in
Harvard Yard are readily available to anybody who wants one and has some vague
idea where to go. This is not a new risk, of course, but I have severe doubts
that throwing technology at the problem will make it go away. There must be
card-keys somewhere that will open all the locks in question; the maintenance
staff needs them. It is only a matter of time before they start circulating
just as freely as any other key. I haven't seen any of these card-keys yet
myself, but it strikes me as highly unlikely that they are not forgeable, and
even more unlikely that (as the article claims) the locks can't be jimmied.
  And none of this even begins to take into account the risks of failure -
power failure, for example, or electronic interference, or any of the other
things that electronic devices are subject to in the real world.

  - David A. Holland                   [email protected]

------------------------------

Date:           Wed, 25 Sep 91 17:14 EDT
From: [email protected]
Subject:        bad error handling in Lamborghini Diablo engine management

This is excerpted (without permission) from an article in the
September 1991 issue of CAR, a British magazine.  In the cover story,
the writer is driving one of the first Lamborghini Diablo automobiles
from the factory back to England:

    "Then, on the outskirts of Annecy, calamity.  The power drops off
    suddenly, there's a soft, metallic buzz, a muffled bang, and a
    much louder, rattling clatter.  The 'right side engine' warning
    light comes on.  Uh-oh, time to coast over to the hard shoulder.

    "Tentatively, we raise the engine cover, lean over the wide
    wings, and peer in.  The right-hand exhaust pipe is glowing like
    the fires of Hades.  The aluminium heat shield surrounding it in
    the bay has melted (aluminium melts at 1000degC), and molten
    blobs trace a glinting trail of our move across the carriageway.
    .  .  .

    "Swiss Air takes us back to the Diablo a few days later.  Factory
    troubleshooters have diagnosed and fixed the problem.  There are
    two engine-management systems, which each look after a bank of
    six cylinders.  If there's trouble on one side, you're still left
    with a straight six to get you home.  Because a wire had fallen
    off one of the Lambda probes for the cat[alytic converter], the
    right-hand side of our engine was closed down by the chip--hence
    the power loss.  But it seems the fuel wasn't cut off at the same
    time, and as it reached the exhaust it ignited inside the pipe."

The moral of this is that no matter how critical a piece of code is,
the correctness of its error-processing paths is even more critical.
It's ironic that in an attempt to provide fault-tolerance, the
designers of the Diablo engine-management system actually increased
risk.  If the engine had simply shut down entirely when the first
fault occurred, it would have undoubtedly shut down the fuel-delivery
system as well.  But by attempting to keep the engine running in a
degraded mode, they allowed a potentially explosive situation to
develop.
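
In code terms (a Python sketch with invented names; Lamborghini's actual
controller logic is not published), the failure amounts to an error path
that disables only half of what it should:

    class Bank:
        """One bank of six cylinders, per the article's description."""
        def __init__(self):
            self.spark_enabled = True
            self.fuel_enabled = True

        def shut_down_as_shipped(self):
            self.spark_enabled = False
            # BUG (per the article): fuel delivery is left running, so
            # unburned fuel reaches the hot exhaust pipe and ignites.

        def shut_down_correctly(self):
            self.spark_enabled = False
            self.fuel_enabled = False     # the error path must cut BOTH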

------------------------------

Date: Wed, 25 Sep 91 15:33:05 MDT
From: [email protected] (Andy Hawks)
Subject: Denver Hacker Hacks NASA

The Denver Post, Denver & The West section   p. 1     9/25/91

NASA vs. hobbyist

Computer whiz accused of illegal access, mischief

By Peter G. Chronis
Denver Post staff writer

An Aurora computer hobbyist who allegedly used a personal computer and his home
phone to penetrate NASA computers hacked off Uncle Sam enough to be indicted on
seven federal counts yesterday.

Richard G. Wittman, 24, the alleged "hacker," was accused of two felonies,
including gaining unauthorized access to NASA computers to alter, damage, or
destroy information, and five misdemeanor counts of interfering with the
government's operation of the computers.

Wittman allegedly got into the NASA system on March 7, June 11, June 19, June
28, July 25, July 30, and Aug. 2, 1990.

Bob Pence, FBI chief in Denver, said Wittman used a personal computer in his
home and gained access to the NASA systems over telephone lines.

The investigation, which took more than a year, concluded that Wittman accessed
the NASA computer system and agency computers at the Marshall Space Flight
Center in Huntsville, Ala., and the Goddard Space Flight Center in Greenbelt,
Md.

The NASA computers are linked to a system called Telenet, which allows
qualified people to access government data bases.  A user name and password are
required to reach the NASA computers.

Federal sources declined to reveal more information because the complex case
involves "sensitive material."

Wittman, a high-school graduate, apparently hadn't worked in the computer
industry and held a series of odd jobs.

The felony counts against him each carry a possible five-year prison term and
$250,000 fine.

           [I suppose the Denver authorities locked up his PC to prevent him
           from using it.  They must have used a Denver Boot Load.  PGN]

               [For our out-of-country users, a Denver Boot is a fiendish
               device that police attach to a wheel to prevent you from
               driving your car until you have paid up all outstanding fines.
               Of course, more fines accumulate unless you pay immediately.]

------------------------------

Date: Wed, 25 Sep 91 14:36:43 EDT
From: [email protected] (Eric Eldred)
Subject: Re: MSAFP, utilities, and all that

Do we really need any more discussion of medical statistics and cost/benefit
analysis of tests?  Yes, because after all the verbiage here I'm afraid more
people are more confused than enlightened.

Mark Fulk has pointed out the importance in decision analysis of assessing
relevant utilities, especially those of, and as assessed by, the humans
affected by the risk.  He refers to Kahneman and Tversky (apparently as those
who note the subjectivity, and often seeming irrationality, of individuals'
risk assessments and utility analyses).  It seems pretty clear now that one
cannot discuss a test such as the MSAFP in isolation from utility analysis.
Not all physicians, and certainly not all patients, are yet aware of this,
however, so it bears repeating.

A further implication of what Mr Fulk notes is that perhaps a test should not
even be done without some counseling and interpretation for those affected, and
an entire therapeutic context.  For example, if an amniocentesis result
predicts a certain disease state of the fetus, would an abortion be done
anyway?  Too often physicians do tests defensively, because they would be
accused of malpractice if they didn't give the "standard" treatment to all.
But that is not treating patients as individuals.

For example, in a separate discussion with Jeremy Grodberg, I pointed out that
utility analysis of a particular vaccine choice should involve more than just
the risk of a disease or reaction to the vaccinated individual.  As a good
example, the US CDC (Centers for Disease Control) decided after much debate
(part of which was actually filmed and shown on a PBS program) that live polio
vaccine should be used instead of killed-virus vaccine.  The latter is
possibly much safer for individuals, and prevents the occasional transmission
of the virus to unprotected others in close contact (some have died; their
families sued the government, and lost).  But the live virus has a possible
extra effect
in increasing the resistance of the population taken as a whole (and hence the
CDC chose it).

Thus the risk to the individual is one thing; the risk to the entire population
is another.  Both factors must be taken into account when issuing a vaccine.
It is quite possible, paradoxically, that the risk to an individual could be
increased by a choice of one vaccine over another.  (Here I'm not going to get
into discussion of the risks of the Salk vaccine, which was hastily withdrawn
at an earlier time, when its manufacture went awry and created false
perceptions of its risks.)

My argument with the CDC is that they apparently have not yet made it clear
that those performing the vaccination should communicate to patients (or
parents) that the killed-virus vaccine could be safer and would be available
if the patient chose it over the live-virus vaccine.  In other countries, the
decisions have been made differently.

I believe this is an important point.  Those exposed to risks should be able to
choose responses most intelligently with full information and should not always
have decisions made for them by supposedly more knowledgeable and intelligent
engineers, MDs or politicians.  Often, with secrecy, the necessary uncertainty
of real life, or the fog of war as factors, those decisions prove quite poor
ones and are hard to reverse.  Generally, even rational people will
voluntarily accept risks that they would object to if imposed by an outside
force.  Many teenage smokers don't put much weight on the chance of getting
lung cancer; 40 years later, once they do have cancer, they are willing to
pay far more than you would predict just to live a little longer.
We should discuss policy openly.

In the interpretation of such tests, it should also be emphasized that--also
perhaps paradoxically--the prior probability of events makes a big difference
in what to make of the test result.  If you redraw Jon Krueger's chart of the
four possible signal/noise outcomes -- but place numbers in the boxes instead of
the yes/no text, and then repeat, varying the incidence of the condition (and
thus the numbers in the boxes), you will confirm the basis of the argument
against the MMPI.  A test that has a high predictive value in a population with
a high prevalence of a condition may not be any good at all (less than, say,
50% predictive value of a positive result) should the prevalence be greatly
decreased -- even if the "accuracy" of the test stays the same.  (Thus, I believe
pre-employment urine drug tests for programmers are counterproductive.)

Each test should be examined experimentally with two critical measures
reported: the "specificity" and the "sensitivity", or essentially what leads
to what we could call "false positives" and "false negatives".  Without those
measures reported, and without a prior estimate of the prevalence of a
condition in the population tested, it is not really possible to say what to
make of a specific test result.  Hence the need for counseling and a wise
therapeutic context, in which results can be verified and acted on correctly.
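
The arithmetic behind the last two paragraphs is worth seeing once.  A
sketch (Python, with purely illustrative numbers) of the predictive value
of a positive result as prevalence varies, holding sensitivity and
specificity fixed:

    def ppv(sensitivity, specificity, prevalence):
        """Probability of disease given a positive test (Bayes' rule)."""
        true_pos = sensitivity * prevalence
        false_pos = (1.0 - specificity) * (1.0 - prevalence)
        return true_pos / (true_pos + false_pos)

    for prev in (0.50, 0.05, 0.001):
        print(prev, round(ppv(0.95, 0.95, prev), 3))
    # 0.5    0.95   -- fine in a high-prevalence population
    # 0.05   0.5    -- a coin flip
    # 0.001  0.019  -- nearly every positive is false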

The other interesting implication of the discussion here has been the
reference to the utility placed on threshold values, or on the relative
importance of false positives and false negatives.  We should realize that a
test for a condition that could be fatal but might be prevented, and for
which a false positive would not lead to needless suffering and anxiety,
can tolerate a larger number of false positives (because it is intended as a
screen, to be sure not to miss anybody with the condition).  A test for which
there might be no treatment, and for which a positive result might lead to
severe consequences (say, MS, or an HIV test before AZT), must be one with
very few false positives.

Consequently, the threshold values of such tests should be selected so as to
magnify the desired results and minimize the undesired consequences.  It is
quite likely that interpretation of some tests should be withheld until
confirmatory results are available from other tests with different utility
values.  Obviously, however, the chance of a false result increases with the
number of tests, so testing should be done with these limits in mind.  It
seems irrational to mandate reliance solely on such tests as HIV antibodies
in arbitrary populations with unknown or low disease incidence, given what we
now know about testing.
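
That two-stage strategy can also be sketched (Python again, illustrative
numbers, and assuming the two tests err independently): screen with a
sensitive test, then confirm only the positives with a specific one.

    def ppv(sens, spec, prev):
        true_pos = sens * prev
        false_pos = (1.0 - spec) * (1.0 - prev)
        return true_pos / (true_pos + false_pos)

    prev = 0.001                          # a rare condition
    screen = ppv(0.99, 0.90, prev)        # sensitive screen: PPV ~ 1%
    # Among the screen's positives the effective prevalence is `screen`,
    # so a specific confirmatory test now does far better:
    confirm = ppv(0.90, 0.999, screen)
    print(round(screen, 3), round(confirm, 3))    # 0.01 0.899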

For those who want to look up all this, I'm sorry I don't have the exact
references in hand.  One book that did initiate a lot of talk on the subject is
quite lucid: "Beyond Normality" by Galen and Gambino (I think it was published
by Little, Brown, in about 1976).  Later work by the Tufts clinical decision
analysis group was published in the New England Journal of Medicine in the late
'70s and early '80s, introducing the concept of the variability of patient
assessment of outcome utility.  I think the issues are still important today,
since even the experts can make decisions poorly from time to time, and the
ones who do make them correctly can't always explain the proper techniques to
the rest of us, and so we end up re-arguing the same points.

Eric Eldred     [email protected]

------------------------------

End of RISKS-FORUM Digest 12.40
************************