Subject: RISKS DIGEST 14.08
REPLY-TO: [email protected]

RISKS-LIST: RISKS-FORUM Digest  Saturday 21 November 1992  Volume 14 : Issue 08

       FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

 Contents:
Installer Programs (Macintosh) (Mark Thorson)
Election hardware and software problems (A Urken)
How to talk about risks (Alan Wexelblat, Stuart Wray, Rob Cameron,
   Mike Coleman, Pete Mellor, L. Bootland)
Re: Software Reliability - how to calculate? (Pete Mellor to Janet Figueroa)
Re: POLL FAULTING recommended for RISKS folks (Pete Mellor)
Advanced technologies for automotive collision avoidance (seminar) (Pete Mellor)
Wanted: GRADUATE PROGRAM in RISKS (Simson L. Garfinkel)
"The Information Society" (Bob Anderson)
An airline software-safety database? (Dave Ratner)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in
good taste, objective, coherent, concise, and nonrepetitious.  Diversity is
welcome.  CONTRIBUTIONS to [email protected], with relevant, substantive
"Subject:" line.  Others may be ignored!  Contributions will not be ACKed.
The load is too great.  **PLEASE** INCLUDE YOUR NAME & INTERNET FROM: ADDRESS,
especially .UUCP folks.  REQUESTS please to [email protected].

Vol i issue j, type "FTP CRVAX.SRI.COM<CR>login anonymous<CR>AnyNonNullPW<CR>
CD RISKS:<CR>GET RISKS-i.j<CR>" (where i=1 to 14, j always TWO digits).  Vol i
summaries in j=00; "dir risks-*.*<CR>" gives directory; "bye<CR>" logs out.
The COLON in "CD RISKS:" is essential.  "CRVAX.SRI.COM" = "128.18.10.1".
<CR>=CarriageReturn; FTPs may differ; UNIX prompts for username, password.

For information regarding delivery of RISKS by FAX, phone 310-455-9300
(or send FAX to RISKS at 310-455-2364, or EMail to [email protected]).

ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY.
Relevant contributions may appear in the RISKS section of regular issues
of ACM SIGSOFT's SOFTWARE ENGINEERING NOTES, unless you state otherwise.

----------------------------------------------------------------------

Date: Wed, 18 Nov 92 20:20:03 PST
From: [email protected]
Subject: Installer Programs

I recently had a terrible experience (involving loss of several hours of work
time and much swearing) with a print spooler program for the Macintosh.  As
with many such programs, you put it on the system by running an installer
program.  The installer performs all sorts of modifications to the system
software, and basically you just gotta cross your fingers and hope the software
works okay.  This one didn't.

As soon as I attempted to print, the system no longer saw my printer, and the
printer went into an alarm condition on every attempt.  Calling the software
company was no help; I probably spent half an hour listening to an NPR station
in Louisiana, hoping someone would pick up the phone.

I tried de-installing the software by restoring the old system from backups
and tracking down all the files the installer had inserted into my system.
This was a lot of work, and had no effect on the problem.  This is an important
point I'd like to make: any installer program should also have the capacity to
de-install, so that the user has at least one back-tracking path if something
goes wrong.
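
To make that concrete, here is a minimal sketch (in Python, with hypothetical
paths; nothing here is from the spooler's actual installer) of how an
installer can record a manifest so that a de-installer can back its changes
out:

    # A minimal manifest-based install/de-install sketch.  The package
    # layout and paths are hypothetical, for illustration only.
    import json, os, shutil

    MANIFEST = "installed_files.json"

    def install(files):
        """Copy each (src, dst) pair and record each dst in a manifest."""
        installed = []
        for src, dst in files:
            shutil.copy2(src, dst)
            installed.append(dst)
        with open(MANIFEST, "w") as f:
            json.dump(installed, f)

    def deinstall():
        """Remove every file the installer recorded: the back-tracking
        path asked for above.  (A real de-installer would also restore
        any system settings it changed.)"""
        with open(MANIFEST) as f:
            for path in json.load(f):
                if os.path.exists(path):
                    os.remove(path)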

A de-install feature probably wouldn't have helped me out, though.  The problem
turned out to be that the program accidentally changed one of my system
defaults.  Without asking, it activated AppleTalk (the networking system),
which normally uses the line printer port.  That caused my default printer
port to switch to the modem port.
                             Mark Thorson  [email protected]

------------------------------

Date: 19 Nov 1992 21:23:13 -0500 (EST)
From: [email protected]
Subject: Election hardware and software problems

Recent discussion of problems with election equipment and software has not
included any mention of the efforts of the Federal Election Commission and
National Association of State Election Directors (NASED) to promote the
development of standards and certification testing of election systems.

NASED recently recognized the first national Independent Testing Authority
(ITA) to conduct testing of election equipment and software.  Election
Technology Laboratories (ETL), a joint operation of Stevens Institute of
Technology and U.S. Testing Company, is now ready to conduct hardware,
electrical, environmental, package, software, and human factors testing of
election systems.

We are very interested in learning about election system breakdowns.  Since
election officials do not typically advertise the breakdown of election
systems, it would be good to hear from anyone in the RISKS FORUM about their
experiences.

------------------------------

Date: Wed, 18 Nov 92 23:49:48 -0500
From: Post-Postmodern Man <[email protected]>
Subject: How to talk about risks (Xantico, RISKS-14.07)

There is a great body of literature about the way people make decisions and
the systematic biases in their thinking.  To cite one example, people tend to
be risk-averse when a question is framed in terms of possible gains, but
risk-seeking when the question is framed in terms of possible losses.

Perhaps the best single reference on this is the book "Judgment under
Uncertainty: Heuristics and Biases", edited by Kahneman, Slovic, and Tversky.
(My copy is almost 10 years old, but it was published by Cambridge University
Press, ISBN 0-521-28414-7.)  Kahneman and Tversky in particular have spent a
long time analyzing the systematic biases in people's judgment and have
published many papers on the topic.

I highly recommend studying their work if you are in a position either of
having to present probabilistic risks to others or of having to evaluate such
risks presented to you.

--Alan Wexelblat, Reality Hacker and Cyberspace Bard, Media Lab - Advanced
Human Interface Group   [email protected] 617-258-9168  [email protected]

------------------------------

Date: Thu, 19 Nov 92 14:31:14 +0000
From: Stuart Wray <[email protected]>
Subject: Re: How to tell people about risks?

> Any comment on how to communicate risk information so that people get a correct
> understanding, especially when you are informing people about very small risks?

Perhaps a good way to communicate information about risks is to tell people the
odds and compare them with the odds of various every-day disasters.  For
example (off the top of my head):

  Odds of dying in a car accident in the next year: 1 in 10000
  Odds of a mother dying during childbirth:         1 in 10000
  Odds of middle-aged adult dying in the next year: 1 in 1000
  Odds of a young child dying in the next year:     1 in 100
  Odds of a baby dying during childbirth:           1 in 100

People are usually given insufficient credit for being able to compare risks
themselves, perhaps because they are seldom handed straight numbers by experts.

Stuart Wray, Olivetti Research Limited, 24a Trumpington Street, Cambridge CB2
1QA ENGLAND   [email protected]  +44 223 343204     fax: +44 223 313542

------------------------------

Date: Thu, 19 Nov 1992 09:21:35 -0800
From: Rob Cameron <[email protected]>
Subject: Re: How to tell people about risks?

Xavier Xantico asks how to communicate risk information to the public in a
meaningful manner.

Suggestion: it would be nice to see the "automobile accident" and the
"cigarette smoking" standards.

Examples (purely fictional):
"The risk of your child seriously injuring himself in 3 hours of
playing with this toy is about the same as that of being an automobile
passenger for 4 minutes."

"The risk of long-term liver damage from this medication is approximately
the same as the risk of cancer from smoking 2 packs of cigarettes."

I realize that there would be serious problems in accurately calibrating
such "standards," but I think they would be very valuable in giving
people a clear understanding of risks.
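
As a toy illustration of how such a standard might work, assuming (rather
than measuring) a baseline fatality risk per minute of driving:

    # A toy "automobile accident standard": express a risk as the number
    # of minutes of car travel carrying the same probability of death.
    # The baseline figure is an assumption for illustration, not a
    # calibrated value.
    RISK_PER_DRIVING_MINUTE = 1e-8   # assumed fatality risk per minute driven

    def as_driving_minutes(risk):
        return risk / RISK_PER_DRIVING_MINUTE

    # A one-in-a-million risk comes out as:
    print(f"{as_driving_minutes(1e-6):.0f} minutes of driving")   # -> 100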

Robert D. Cameron, School of Computing Science, Simon Fraser University
Burnaby, B.C.,  V5A 1S6     [email protected]

------------------------------

Date: Fri, 20 Nov 92 10:35:08 GMT
From: [email protected] (Mike Coleman)
Subject: Re: How to tell people about risks?

In a previous article, Xavier Xantico (?) writes about the difficulty of
communicating statistical risks to ordinary people.  The Richter scale seems to
be successful in allowing people to think about and compare seismic events
(earthquakes); perhaps we can develop a similar scale for statistical risks.

I suggest a logarithmic (base 10) scale based on the expected lifetime
frequency of the event (and population) in question.  To avoid confusion, a
"lifetime" should be defined as a fixed duration, 80 years perhaps, since many
interesting events will be fatal.

Thus, an event scoring 0 on this scale would be expected to happen once per
lifetime (per person), an event scoring +1 ten times, etc.  The event "dying in
an airline crash" might score -6, "dying of cancer" -1, "hearing a campaign
promise" +3, and so on.

The approximate useful range of the scale would be -14 (a 1 in 5000 chance of
occurring once in the lifetimes of 5 billion people) to +11 (expected to
happen tens of times per second for each individual).

One or two canonical events could be chosen for each integer on the scale to
make the scale easily accessible to nonstatisticians (such as myself).
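
A minimal sketch of how the scale could be computed; the annual frequencies
are invented, but the resulting scores are comparable to those suggested
above:

    # score = log10(expected occurrences per fixed 80-year lifetime)
    import math

    LIFETIME_YEARS = 80

    def risk_score(events_per_year):
        return math.log10(events_per_year * LIFETIME_YEARS)

    for event, freq in [("dying in an airline crash", 1e-7),
                        ("dying of cancer", 2e-3),
                        ("hearing a campaign promise", 15.0)]:
        print(f"{event:28s} {risk_score(freq):+5.1f}")
    # prints roughly -5.1, -0.8, and +3.1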

--Mike Coleman ([email protected]), Lord High Executioner of Anhedonia--

------------------------------

Date: Fri, 20 Nov 92 10:57:11 GMT
From: Pete Mellor <[email protected]>
Subject: Re: How to tell people about risks? (Xavier Xantico, RISKS-14.07)

> Any comment on how to communicate risk information so that people get a
> correct understanding, especially when you are informing people about very
> small risks?

Well, what you *don't* do is say:

      "Every time you do X, it knocks n days off your life!"

Such statements are hopelessly misleading to anyone not familiar with the
concepts of "distribution of time to failure" and "mean time to failure".
It is unfortunate that many government health education statements are
couched in precisely those terms.

Other than that, you must get across a few of the basic ideas of probability
theory. The "urn" model is quite intuitive: a 0.01 probability is the
probability of pulling the black ball out of an urn containing 99 white balls
and one black ball.
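
The urn model also lends itself to a quick simulation, which can help
convince a sceptical student; a minimal sketch:

    # Simulate the urn: 99 white balls and one black ball, so the black
    # should be drawn in about 1% of trials.
    import random

    urn = ["white"] * 99 + ["black"]
    trials = 100_000
    blacks = sum(random.choice(urn) == "black" for _ in range(trials))
    print(f"black drawn in {blacks / trials:.2%} of trials")   # ~1.00%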

The best introduction to probability for the non-mathematical reader that I
have read is "How to Take a Chance" by Darrell Huff (perhaps better known for
his other popular book "How to Lie with Statistics").

Also try:

"Making Decisions", D.V. Lindley, John Wiley & Sons, 2nd Ed., 1985

Lindley is a bit more demanding than Huff. (Also Huff teaches you how to play
good poker! :-)

To see if the ideas have been understood, ask the "student" what is wrong with
the following:

A businessman, who needs to fly a lot in order to do his job, is terribly
afraid that there will be a bomb on board one of his flights. He asks his
friend (a probability theorist) what to do about it.

"Well!" says the friend, "What do you think are the chances that there
will be a bomb aboard your flight?"

"Judging by recent statistics," says the worried flyer, "it could be about
1 in ten thousand."

"So what would be an acceptable risk?", says the friend. "How about 1 in
100 million, which is the probability of two separate bombs being on board?"

"Yes, that would be OK."

"Fine! So all you have to do is, every time you board a flight, make sure you
carry a bomb!"

Peter Mellor, Centre for Software Reliability, City University, Northampton
Sq., London EC1V 0HB, Tel: +44(0)71-477-8422, JANET: [email protected]

------------------------------

Date: Sat, 21 Nov 92 13:58:58 WET
From: [email protected]
Subject: Re: How to tell people about risks? (Xantico, RISKS-14.07)

One thing you have to remember is that the risk of something happening isn't
the only factor to be considered when deciding whether or not to accept that
risk.  The consequences of the event, its preventability, and the likely
consequences of alternative actions also have to be considered. To return to
your example, I have a fungal infection in the nail beds on one of my feet.
The only cure is to take tablets that ensure there is a fungicide in your
blood, and hence fungicide reaching the nailbed. Now this drug occasionally
destroys the liver, and doesn't always work. Without a liver I would be dead.
The risk is low, but the benefits from the drug are also minor: I can live
with tickly toenails, especially as I can treat the itch.  It isn't worth the
risk, in my opinion.  If, on the other hand, I had a metastasised cancer and
the drug of choice could destroy my liver, I would take it.
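
The trade-off reduces to a simple expected-loss comparison; a sketch with
invented numbers:

    # Accept a risky treatment only when the expected loss it averts
    # exceeds the expected loss it incurs.  All figures invented.
    def worth_the_risk(p_works, loss_averted, p_harm, loss_incurred):
        return p_works * loss_averted > p_harm * loss_incurred

    # Tickly toenails versus a (rare) destroyed liver:
    print(worth_the_risk(0.7, 1.0, 0.001, 1000.0))      # False: not worth it
    # The same drug risk set against a metastasised cancer:
    print(worth_the_risk(0.7, 10000.0, 0.001, 1000.0))  # True: take it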

I once went to a talk on people's assessments of the risk of having a
handicapped baby, and how likely people were to risk amniocentesis/chorionic
villi testing (which has a risk of inducing a miscarriage).  The lecturer was
startled that the willingness to be tested varied with whether or not people
already had a handicapped child, how many normal children they had, and the
severity of the handicap as well as the risk that the child was handicapped.
He didn't see that the greater the impact caring for a handicapped child
would have, the more important it was to avoid that outcome, and the less
important the relatively fixed "cost" of a miscarriage became.

He was also horrified that people who didn't want an abortion would want to
be tested.  I think that makes sense.  Knowing gives you time to prepare, and
there is still the possibility of a miscarriage (even if people don't admit
that consideration to themselves).

Basically, what I am saying is that people aren't misjudging risks as often
as professionals assume; they are taking other factors into account as well.

L. Bootland, University of Edinburgh, Genetics        [email protected]

------------------------------

Date: Fri, 20 Nov 92 18:00:23 GMT
From: Pete Mellor <[email protected]>
Subject: Re: Software Reliability - how to calculate?

To: [email protected]

Janet,

I don't normally read comp.software-eng. (It takes up enough of my time reading
comp.risks and a couple of others!) However, your message was passed on to me
by Dave Bolton in the Computer Science Department here. (Thanks, Dave!)

I hope you don't mind, but I have sent this reply, including a copy of your
original message, to comp.risks also, since it is relevant to a recent
discussion there. Perhaps you could forward it to comp.software-eng as well,
since I am not sure of the distribution address for that list, and I'm too
lazy to look it up right now!

----- Begin Included Message -----

>Newsgroups: comp.software-eng
>From: [email protected] (Janet Figueroa (x37973))
>Subject: Software Reliability - how to calculate?
>Summary: Require reference for software reliability calculation
>Keywords: software reliability reference calculation
>Organization: Clinical Diagnostics Division, Eastman Kodak Company
>Date: Tue, 17 Nov 1992 18:21:19 GMT

I just had an interesting discussion with a peer who is an electrical engineer.
It was about our reliability budget in which we have set a goal of a certain
number of service calls per year upon the introduction of the instrument.  She
asked if there was a methodology followed in calculating reliability for
software.  One of the examples we were discussing was code that resides in a
PROM.  If there was an error in the software and our service representative
will have to replace the PROM, how would that be looked at?

I told her point blank that I was not aware of any process used to
calculate software reliability.  Is there one?  I would be very
interested to know how the reliability is measured in an application
programming environment as compared to how it is determined for a
device driver, for example.

Any insights would be appreciated.  Thank you very much

Janet C. Figueroa, [email protected]

----- End Included Message -----

> She asked if there was a methodology followed in calculating reliability for
> software.

There certainly is!  Basically, you observe the software during its later test
and trial phases, when it is reasonable to assume that it is being run under
conditions which are a fair approximation to its eventual operational
environment.

You record the execution time and the failures.  You diagnose the cause of
each failure, and retain in your data set only those failures which are due
to activations of *new* faults (i.e., your final data set consists of
instances of detecting new faults over execution time).

Unless something is wrong with the environment in which you are running
your software, you will observe a decreasing rate of finding new faults
over execution time. You can do a statistical analysis of the growth in
reliability as you debug the software, and estimate such quantities as:-

 - current rate of finding new faults,

 - number of faults that will be found in a given period,

 - median time to failure,

 - extra testing time required to get to given target levels of any of the
   above.

To do the analysis, you can apply various software reliability models. The
estimation process involves estimating the values of the model parameters from
the observed history of finding faults.
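
As an illustration (a sketch, not CSR's own tooling), here is one such model,
the Goel-Okumoto model, in which the expected number of faults found by
execution time t is m(t) = a(1 - exp(-b t)), fitted by maximum likelihood to
invented fault-detection times:

    # A minimal sketch (invented data) of fitting the Goel-Okumoto
    # reliability-growth model, m(t) = a*(1 - exp(-b*t)), to the times
    # at which new faults were detected, by maximum likelihood.
    import math

    def fit_goel_okumoto(times, T):
        """times: execution times of new-fault detections in [0, T].
        Returns (a, b): estimated total faults and detection-rate decay."""
        n, s = len(times), sum(times)
        def g(b):   # d(log-likelihood)/db after substituting the MLE for a
            return n / b - s - n * T * math.exp(-b * T) / (1 - math.exp(-b * T))
        lo, hi = 1e-9, 1.0
        while g(hi) > 0:          # make sure the root is bracketed
            hi *= 2
        for _ in range(100):      # bisect for the root
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
        b = (lo + hi) / 2
        return n / (1 - math.exp(-b * T)), b

    # Ten faults found over 1000 hours, at increasing intervals:
    times, T = [12, 45, 103, 180, 260, 370, 490, 640, 800, 950], 1000.0
    a, b = fit_goel_okumoto(times, T)
    print(f"estimated total faults: {a:.1f}")
    print(f"current rate of finding new faults: {a*b*math.exp(-b*T):.4f}/hour")
    print(f"faults expected in the next 1000 hours: "
          f"{a*(math.exp(-b*T) - math.exp(-2*b*T)):.1f}")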

Be warned! :-

1. Different reliability models sometimes give different predictions based
  on the same set of data. (However, there are statistical methods which
  can measure the bias in the predictions from each model, correct for this,
  and so reconcile the predictions.)

2. The whole procedure depends *crucially* upon the observations being made
  in the same operational environment as that in which the software will be
  used in service.

3. The measure of "execution time" employed must be a meaningful measure
  of the degree to which the software has been used, taking account of the
  type of software and its application.

4. Serious problems arise if very high reliability is required. (Basically,
  the statistical methods require a reasonably large data set of faults
  found in order to work with any degree of accuracy. For ultra-high levels
  of reliability, such as those required for safety-critical systems, the
  observation of even one failure during the trial period means that the
  software is not good enough!)

Apart from the restriction regarding ultra-high reliability requirements,
the methods can be applied to any type of software, whether resident in a
PROM or not. They have been applied "in anger" to all types of software where
a modest failure-rate is acceptable, including operating systems and
application software, and have given reasonably accurate predictions.

Given that you are envisaging a situation where support representatives *will*
be deployed (i.e., this system *is* going to fail, isn't it? :-), these are
exactly the sort of methods you should be employing.

Logistical calculations can be based on the reliability estimates.
For example: How do we decide when to issue a "bug-clearance" release on a new
PROM, in order to achieve the maximum cost-effectiveness of our support
operation?
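
One crude answer, continuing the model sketched above with invented costs:
issue the release as soon as the expected cost of the field failures it would
avert exceeds the cost of issuing it.

    # A toy release-timing rule.  The fitted values and all costs are
    # invented for illustration.
    import math

    a, b, T = 13.0, 0.0015, 1000.0   # parameters of the kind fitted above
    INSTALLED_BASE = 2000            # assumed units in the field
    COST_PER_CALL = 400.0            # assumed cost of one service call
    RELEASE_COST = 150_000.0         # assumed cost of a PROM release campaign

    horizon = 1000.0                 # hours of service being considered
    failures_per_unit = a * (math.exp(-b * T) - math.exp(-b * (T + horizon)))
    cost_averted = INSTALLED_BASE * COST_PER_CALL * failures_per_unit
    print("issue the release" if cost_averted > RELEASE_COST else "hold off")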

There is a voluminous literature on all of these problems, and CSR has
specialised in them for a number of years. If you require any further
information, copies of papers, etc., do not hesitate to get in touch.

Regards,

Peter Mellor, Centre for Software Reliability, City University, Northampton
Sq., London EC1V 0HB, Tel: +44(0)71-477-8422, JANET: [email protected]

------------------------------

Date: Fri, 20 Nov 92 10:22:42 GMT
From: Pete Mellor <[email protected]>
Subject: Re: POLL FAULTING recommended for RISKS folks (Mercuri, RISKS-14.07)

Rebecca Mercuri ([email protected]) writes:

>         Many of the RISKS postings point to the inadequacy of software
>         engineering methodologies and practices, yet few colleges and
>         universities offer COMPREHENSIVE courses in SW Eng. and far fewer
>         REQUIRE them as part of core curricula for the next generation of EE
>         and CS professionals.

City University now offers (starting this year, i.e., Oct. 92) a BEng in
Software Engineering as a first degree. The BSc in Computer Science has for
many years included both core and optional modules on Software Engineering,
which are also taken by students on other degree programmes (Mathematics,
Computer Engineering, Economics and Computing, etc.)  Similar modules are also
offered as options on several Master's courses.

Having been heavily involved in teaching many of these courses, and being
part of the design team for the new BEng in SW Eng., I would be extremely
interested to hear the views of other RISKS folk about what should go into
a "COMPREHENSIVE" course in SW Eng.

Peter Mellor, Centre for Software Reliability, City University, Northampton
Sq., London EC1V 0HB, Tel: +44(0)71-477-8422, JANET: [email protected]

------------------------------

Date: Fri, 20 Nov 92 13:21:09 GMT
From: Pete Mellor <[email protected]>
Subject: Advanced technologies for automotive collision avoidance (seminar)

RISKS readers may be interested in the following seminar.  Some UK readers
may even be able to come along. (All welcome!)

      ************************************************************
      *       City University Computer Science Seminar           *
      * Advanced technologies for automotive collision avoidance *
      *        Raglan Tribe, of Lucas Automotive Ltd.            *
      *     Wednesday 25th November, 2 - 3pm, room A230          *
      ************************************************************

Every year, in the 12 countries of the European Community, some 55,000 people
are killed in road accidents. The European vehicle manufacturers have joined
together with component suppliers to share research on improved safety and
traffic efficiency.

A team at Lucas is collaborating with Jaguar to develop collision avoidance
systems suitable for fitting to the majority of road vehicles. Basically, the
system will use various sensing methods -- optical, infrared, and radar -- to
detect where the road is, and the location of other objects (vehicles, people,
stationary objects) that are likely to cross the path of the vehicle equipped
with the system. The system then assess what threat these objects pose, and
assists the driver in selecting the best course of action.

Enormous processing power is necessary to extract edges in the video image,
before detecting vehicles and road or lane edges. Some 3000 million
operations per second must be carried out for real-time working. The next
stage of the project will investigate the fusion of microwave and infrared
sensors to provide a more reliable view of the world around the vehicle.

All are welcome to come to this seminar on Wednesday the 25th November,
from 2 - 3 pm in A230. We are very fortunate to be able to listen to
Raglan Tribe, of Lucas Automotive Ltd., who will also be presenting a
short video of the system. Please contact Geoff Dowling on 071 477 8442,
or e-mail [email protected] to confirm these arrangements in case
of last minute changes.

Peter Mellor, Centre for Software Reliability, City University, Northampton
Sq., London EC1V 0HB, Tel: +44(0)71-477-8422, JANET: [email protected]

------------------------------

Date: Fri, 20 Nov 92 09:33:59 -0500
From: [email protected] (Simson L. Garfinkel)
Subject: Wanted: GRADUATE PROGRAM in RISKS

I am looking for a PhD program in Computer Science or in Technology and
Public Policy, in the US or Canada, in which one could specialize in the
topics familiar to RISKS readers.

So far, I have only heard of the Computing, Organization, Policy and Society
program at UC Irvine, headed by Dr. Rob Kling, which covers RISKS and related
issues.  It sounds good.

Any others out there?

------------------------------

Date: Wed, 18 Nov 92 15:55:50 PST
From: Bob Anderson <[email protected]>
Subject: "The Information Society"

  Dr. Stephen Lukasik has agreed to act as guest editor of a special issue of
"The Information Society" journal addressing errors in large databases and
their social implications.  Attached is a brief prospectus for this special
issue.

  If you know of data or research reports relevant to the topics mentioned in
this prospectus, or can suggest relevant authors or are interested in
submitting a manuscript yourself, please contact one of us.

  Also, I would appreciate your forwarding or posting this message to others
who may be interested in this topic.

  If anyone receiving this message is unfamiliar with the journal and its
focus and interests, I would be happy to supply additional information.  (For
those on the RAND mailing list: It is available for routing or perusal in the
RAND library.)  Thanks for your assistance.
                                               Bob Anderson

       - - - - - - - - - - - - - - - - - - - - - - - - - - -

                  ERRORS IN LARGE DATABASES AND THEIR
                          SOCIAL IMPLICATIONS

                     Prospectus for a special issue
                   of the Information Society Journal

With the growth of information technology over time, we are becoming
increasingly affected by data in electronic databases.  The social and business
premise is that electronic databases improve productivity and quality of life.
The dark side of all this is that these databases contain errors: most are
trivial, but some by their nature impose a penalty on society.  The penalties
can range from minor annoyance and modest administrative cost in having a
record corrected, through more serious cases where more costly consequences
ensue, to, conceivably, loss of life or major loss of property.

The consequences to society of errors in electronic databases can be expected
to increase, probably at an increasing rate.  Some factors contributing to this
expected increase are the increasing extent, in both size and coverage, of
existing databases; increasing capture of data by automated transaction
systems, from text and image scanners and the like; greater coupling of
databases, either by administrative agreements or by more sophisticated search
processes; more "amateur" database administration with increasingly widespread
use of information technology; increasing use of heuristic search techniques
that lack "common sense;" and probably other well-meaning but pernicious
influences.

The purpose of the proposed issue is to accomplish the following: (a) increase
recognition and awareness of the growing problem of errors in electronic
databases, which are increasingly becoming regulators of modern life;
(b) encourage greater attention to the collection of error rate data and to
quantitatively assessing the social and economic costs deriving from those
errors; (c) foster theoretical and empirical studies of the propagation of
errors through the coupling of, or joint search of, multiple databases; and (d)
encourage the formulation of measures, in both technology and policy domains,
designed to limit the costs accruing from the inherent growth in size and
connectivity of electronic databases.

We seek papers for the issue that will focus on the following aspects of the
problem addressed here: (1) an enumeration of socially relevant databases,
whose errors can have important consequences, either from a large number of
small unit cost consequences or a small number of high cost consequences; (2)
quantitative data on errors in databases, classified by the nature of the
errors and their derivative costs; (3) obstacles to a full and open discussion
of the problem such as those deriving from concern over legal liability and
loss of business from "owning up" to the problem; and (4) proposals for
technical and policy measures that can limit the growth of the problems
addressed.

The premises of the journal issue are: (1) that the problems of errors in
databases cannot be minimized until they are adequately recognized and fixes
explored by the professionals in the field; and (2) that we must move from the
anecdotal level, where horror stories abound, to a quantitative level where the
economics of fixes, either in quality control at the point of data collection,
or the quality control of the output of database searches, can be sensibly
analyzed.

Your interest in contributing to this special issue is invited.  Suggestions
for possible topics, authors, or an interest in contributing should be
communicated to one of:

    guest editor:                      editor-in-chief:
        Dr. Stephen Lukasik                Dr. Robert H. Anderson
        1714 Stone Canyon Road             RAND, P.O. Box 2138
        Los Angeles CA  90077              Santa Monica CA  90407-2138
        net: [email protected]              net: [email protected]
        tel: (310) 472-4387                tel: (310) 393-0411 x7597
        fax: (310) 472-0019                fax: (310) 393-4818

------------------------------

Date: 20 Nov 92 19:08:47 GMT
From: [email protected] (Dave "Van Damme" Ratner)
Subject: Airline Software-safety database

I am posting this for Robert Ratner, Ratner Associates Inc, which does
international consulting in air-traffic control and aviation safety issues.  He
is looking for a publicly accessible database on software-related incidents in
this area.  Email correspondence can be sent to me at [email protected].
Thanks.            Dave "Van Damme" Ratner    [email protected]

                      [I suppose he should be reading RISKS!  PGN]

------------------------------

End of RISKS-FORUM Digest 14.08
************************