Subject: RISKS DIGEST 17.51

RISKS-LIST: Risks-Forum Digest  Monday 4 December 1995  Volume 17 : Issue 51

  FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

***** See last item for further information, disclaimers, etc.       *****

 Contents:
Costs of 1999->2000 date fix (James K. Huggins)
CD-ROM that hoses your hard drive (Stanton McCandlish)
Re: Sex, Lies and Backup Disks (Tom Wicklund)
How's your spell? (Peter Ladkin)
Re: Spelling Correctors (Edward Reid)
New software that is just too clever (Malcolm Farmer)
Ambiguous abbreviation: what does "NCSA" mean? (Jonathan Thornburg)
Industrial espionage  0.5% (David Lifton)
Re: risks in medical equipment (Pete Mellor, Bridget Moorman,
   Erik Hollnagel, Jay Harrell, Kenneth Albanowski, Steve Branam,
   Robert J Horn, Rochelle Grober)
ABRIDGED info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Mon, 4 Dec 1995 13:50:13 -0500 (EST)
From: "James K. Huggins" <[email protected]>
Subject: Costs of 1999->2000 date fix

The Sunday, 3 December 1995 issue of the Detroit News included an article
discussing the projected costs of fixing current programs to correctly
handle dates in the year 2000.
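
  [For readers new to the issue, the failure mode itself is easy to show.
  A minimal illustrative sketch in Python (not from the article; the
  two-digit year storage is the commonly cited legacy convention):

    # Hypothetical legacy-style date arithmetic on two-digit years.
    def years_elapsed(start_yy, end_yy):
        return end_yy - start_yy   # silently assumes one shared century

    # A record stamped in 1995 ("95") processed in 2000 ("00"):
    print(years_elapsed(95, 0))    # prints -95, not 5; sorting, billing,
                                   # and expiry logic built on this breaks
  ]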

This discussion has appeared many times in RISKS, but the article
provides some interesting numbers (both actual and projected).

Consumers Power Co. (a Michigan-based electrical utility) began the
conversion process two years ago and estimates it will take $20-45M (US) to
convert its 350 computer systems.  A spokesman for Consumers Power says
that the problem for them is quite serious; if not fixed, it could lead to
denial of service for its customers.  Blue Cross (a health-care provider),
which processes 100 million medical claims per year, has budgeted $3M for
this year alone to begin converting its systems.

The article gives a few other estimates, claiming that the average Fortune
500 company will spend $100M to convert its own systems.  Worldwide, the
cost is estimated at $300-600 billion.  [THIS IS A CORRECTION IN THE
            ARCHIVE COPY.  IT WAS INCORRECTLY "million" IN THE ORIGINAL.
            THE CORRECTION IS NOTED IN RISKS-17.52.  PGN]

Bill Schoen, a computer programmer from Ford who (the article states) was
the first to write about the 2000 problem in 1983, says he contacted every
Fortune 500 company about the problem at that time, but few were interested.
"The people that were in positions of power then were going to be retired
long before this problem kicked in.  They didn't care about a mess they
weren't going to be around to clean up."

--Jim Huggins, University of Michigan ([email protected])

  [This problem gives new meaning to "going out on a date"
  (which many systems will do on 1/1/00).  PGN]

------------------------------

Date: Mon, 4 Dec 1995 17:03:35 -0800 (PST)
From: Stanton McCandlish <[email protected]>
Subject: CD-ROM that hoses your hard drive

[From Cable Regulation Digest]

More powerful than a defective Pentium chip, it's the Intel CD-ROM!  Intel
developed a press kit that really has impact on reporters -- it cripples
their computers. In touting their cable modems at the Western Show, Intel
execs distributed a press kit with a CD-ROM demo of some of their modem
products. Unfortunately, cranking it up will change those pesky
configuration specs on your hard drive, leaving you unable to run other
programs. Staffers hurriedly posted signs in the press room warning that the
CD "will seriously damage your computer system. Please discard immediately."

Stanton McCandlish, Electronic Frontier Foundation
[email protected]    http://www.eff.org/"

------------------------------

Date: 4 Dec 1995 17:29:30 -0700
From: [email protected] (Tom Wicklund)
Subject: Re: Sex, Lies and Backup Disks

While it's well known that deleting a file from a disk doesn't really remove
its contents, I'm not sure that even a disk-erasure program will get rid of
the data.

I remember, a few years after the Nixon tapes and the infamous erasure of
some chunk of tape (20 minutes?), talking to somebody at a research lab who
said that, according to their magnetic-recording expert, the contents could
have been recovered even if they had simply been erased -- enough signal
would remain if you really wanted to get the information back.

The best advice, if you're giving a computer disk to somebody else, is to
use a new disk, and to avoid any copy software that might also pick up the
unused portion of the last sector of each file, where remnants of earlier
data can linger.
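
As a rough sketch of the first point -- that deletion and even simple
erasure leave data behind -- here is a minimal overwrite-before-delete
routine in Python.  Note that even this covers only the bytes the
filesystem reports as the file's length; slack space in the last sector,
remapped sectors, and residual magnetic signal (the point of the Nixon-tape
anecdote) are all untouched:

    import os

    def overwrite_and_delete(path, passes=3):
        # Overwrite the file's visible bytes before deleting it.  This
        # does NOT touch sector slack, remapped blocks, or residual
        # magnetic signal -- exactly the gaps described above.
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())
        os.remove(path)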

------------------------------

Date: Mon, 4 Dec 1995 10:27:53 +0100
From: [email protected]
Subject: How's your spell?

>    spellchecker -> spell checker

Although I knew there was a lot of fairy dust in Silicon Valley, I didn't
imagine it had been automated yet.  A spell checker checks spells. I don't
think that's what was meant.  Something that checks spellings is either a
`spelling checker' or, in the common Valley contraction, a `spellchecker'.

  [And RISKS has been checking for many a spell.  PGN]

------------------------------

Date: Sun, 3 Dec 95 09:20:18 -0500
From: [email protected] (Edward Reid)
Subject: Re: Spelling Correctors

Spelling deviations have always been funny. The little column-end snippets
in The New Yorker are often based on spelling deviations, and have been
since long before computers were involved. An old favorite of mine
substituted Datsun for dachshund.

The exact nature of the errors has changed, since computers check spelling
differently from the human mind. The old deviations, like the example above,
were often based on phonetic similarity whereas the new deviations are
related to digital manipulation of the letters. But aren't we just
continuing our fascination with spelling rather than discussing any risk of
computer use?

Our society is obsessed with "correct" spelling. This obsession didn't exist
400 years ago. If a writer changed a spelling, well, that was a new usage.
I've heard it claimed that Shakespeare spelled his own name in more
different ways than anyone else in history.

I pick up spelling easily. This is a blessing and a curse.

In the current environment, where "correct" spelling is considered
important, I can produce a document with no spelling deviations beyond an
occasional typo. But when I read a document that is well written but marred
by significant deviations from standard spelling, my mind is drawn to the
spelling to the detriment of my ability to absorb the content. At such
times, I often wish that I and others had a more accepting attitude toward
spelling deviations.

Perhaps the risk is that computers have given us another basketful of
examples to feed an unhealthy fascination with spelling.

Edward Reid

  [Similar items have appeared in RISKS in the past.  However, this is
  clearly a serious risk, and deserves repeating for new readers.  PGN]

------------------------------

Date: Mon, 04 Dec 95 13:10:32 PST
From: [email protected] (Malcolm Farmer)
Subject: New software that is just too clever

Jeffrey Sherman (RISKS-17.49) mentioned quirks of WordPerfect 6.1 being too
clever. Here's something we found a few weeks ago:

A colleague was trying to prepare a table for publication, listing locations
of particular parts of a gene in a long sequence.  He would type something
like '179-285' into the table boxes, only to find, when he tabbed along to
the next box, that WP had replaced it with -106.  He couldn't find any way
to persuade WordPerfect to stop treating the table box as a spreadsheet
entry, and being in a hurry to get the paper completed, wound up typing
these entries in the form '179 to 285', which finally persuaded WP to leave
them alone.
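
The failure is easy to reproduce.  Here is a guess, in Python, at the kind
of over-eager cell parsing presumably at work (the WordPerfect internals
are, of course, an assumption):

    import re

    def smart_cell(text):
        # Guessed-at "helpful" parsing: anything that looks like
        # arithmetic is evaluated instead of being kept as typed.
        if re.fullmatch(r"\d+-\d+", text):
            a, b = map(int, text.split("-"))
            return a - b           # '179-285' silently becomes -106
        return text                # '179 to 285' survives intact

    print(smart_cell("179-285"))     # -106
    print(smart_cell("179 to 285"))  # 179 to 285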

------------------------------

Date: Tue, 31 Oct 1995 07:59:25 -0800
From: Jonathan Thornburg <[email protected]>
Subject: Ambiguous abbreviation: what does "NCSA" mean?

The posting on "NCSA FireWallCon '96" in RISKS v17 n42 unintentionally
illustrates another (in)famous risk: ambiguous abbreviations.

When reading this item, I, and I suspect many others, assumed that "NCSA"
referred to the US National Center for Supercomputing Applications,
well-known for (among many other things) such widely-used software as NCSA
Telnet for PCs and more recently NCSA Mosaic.  Since NCSA has been a major
internet site for some years, its sponsoring a firewalls convention seemed
quite plausible.  I thought the various *.ncsa.com internet addresses given
were a bit odd, since NCSA always used to have *.ncsa.uiuc.edu addresses,
but I didn't pay too much attention to this.

It was only when reading the _last_line_ (the poster's signature, which
in fact our moderator usually trims) of the RISKS item that I discovered
that "NCSA" actually referred to the (presumably US) National Computer
Security Association!

The lessons to be learned?  Well, there are about 450K possible 4-letter
sequences, but a simple "birthday paradox" calculation shows there's a 50%
chance of duplication if we randomly choose 800 (!) of them.  (The nonrandom
letter-frequency distribution of real-world abbreviations just makes things
worse.)  So this sort of "name abbreviation ambiguity" is going to happen
more and more often.  About all we can do is to spell out key abbreviations
in the lead paragraphs of our postings.
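
For the curious, a quick Python sketch of the birthday-problem arithmetic
behind the 800 figure (26^4 = 456,976 possible sequences):

    def collision_probability(k, n=26**4):
        # Probability that at least two of k uniformly random 4-letter
        # abbreviations coincide (the classic birthday problem).
        p_distinct = 1.0
        for i in range(k):
            p_distinct *= float(n - i) / n
        return 1.0 - p_distinct

    print(collision_probability(800))   # about 0.50, as claimed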

- Jonathan Thornburg
 U of British Columbia / Physics Dept / <[email protected]>

------------------------------

Date: Sun, 3 Dec 1995 13:43:29 +0000
From: David Lifton <[email protected]>
Subject: Industrial espionage  0.5%

A snippet from the magazine "Information Management and Computer
Security" provides the following survey (orig. source: Survive):

Survey of Disaster Causes: Europe and USA 1980 - 1991

1.  Software problems  14.8%
2.  Fraud   11.1%
3.  Malicious Damage  10.2%
.
20.  Electromagnetic interference  1.3%
23.  Earthquake   0.8%
26.  Industrial espionage   0.5%

Does anyone know of any *published* industrial espionage (information
warfare) cases?

Another article, by John Bates (citing the KPMG 1995 Fraud Survey,
Australia), states that information theft, at 6%, ranked 7th on the scale
of fraud most prevalently perpetrated by management.

[email protected]   Lifton Research

  [There are quite a few *unpublished* ones.  PGN]

------------------------------

Date: Sun, 3 Dec 95 16:21:59 GMT
From: Pete Mellor <[email protected]>
Subject: Re: risks in medical equipment (Harvey, RISKS-17.50)

> ... because some technologies are socially defined as 'male' ...

Fortunately for the RISKS community, my partner has discovered the real
reason for people being intimidated by technology. She has observed that,
despite the fact that I am a computer professional, I seem to be totally
intimidated by digital technology in certain contexts. Specifically, I am
totally unable to recall which programs to use for different types of
clothes when putting a load of washing into the automatic washing machine.

She has thought long and hard about this and has concluded that the
difference is that most kitchen equipment comes in nice attractive light
colours whereas technologies that are socially defined as 'male' often
come in dark coloured or black boxes.

She has concluded that the simplest solution to enable me to do a load
of washing on my own is to paint the washing machine black.

Unfortunately, the correlation is not perfect. For example, I can never
remember how to programme the (black) video recorder to record a TV show
at a particular time when we are out, either.

A more in-depth study is needed. If the European Union could provide me
with funds for the purchase of a lot of advanced domestic gadgets in a
variety of attractive colours, I will undertake the necessary controlled
experiments.

Peter Mellor, Centre for Software Reliability, City University, Northampton
Square, London EC1V 0HB, UK    +44 (171) 477-8422   [email protected]

------------------------------

Date: Mon, 4 Dec 1995 11:28:32 -0700 (MST)
From: Bridget Moorman <[email protected]>
Subject: Re: Alarm and alarm-silencing risks in medical equipment

I read with interest the accounts of alarm silencing within the medical
environment.  In my position as a clinical engineer, I have spent many hours
evaluating medical equipment for purchase and have seen the same issues with
respect to *all* medical equipment.

Specifically with infusion pumps - most do have some type of indication that
they are being operated on battery - sometimes it's a positive indication (a
battery icon lit up on the front panel) and sometimes it's a negative
indication (an AC plug lit up on the front panel).  Most pumps do have an
alarm to indicate that the battery is being depleted - usually a two-stage
alarm - one that sounds half an hour before the battery depletes and one
that sounds right before the battery depletes and the equipment powers off
(if it is not plugged in immediately).  There are also some battery
management menu items in later-model pumps.  Lastly, most personnel are
trained in the use of the equipment - in fact, because of some voluntary
regulatory requirements (the Joint Commission on Accreditation of
Healthcare Organizations - JCAHO for short), hospitals are required to
document that training has occurred *and* to prove proficiency in the use
of the equipment they need in discharging their duties.
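
The two-stage behaviour described above is simple to state in code.  A
minimal Python sketch (the thresholds and messages are illustrative
assumptions, not any vendor's actual firmware):

    def battery_alarm(minutes_remaining):
        # Two-stage low-battery alarm, as described above; thresholds
        # are invented for illustration.
        if minutes_remaining <= 2:
            return "CRITICAL: battery exhausted; powering off unless plugged in"
        if minutes_remaining <= 30:
            return "WARNING: about 30 minutes of battery remaining"
        return None   # normal operation, no alarm

    for m in (120, 25, 1):
        print(m, battery_alarm(m))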

Even with the above, occurrences like those described in the posts happen.
There are several things going on - first, in an ICU, a nurse is *bombarded*
with many different pieces of equipment, which for the most part have
slightly different operating/user interfaces and different purposes.
Mistakes are bound to happen.  Second, if a hospital has not standardized on
one manufacturer for a certain type of device, the user has yet another
interface/another combination to worry about - it's kind of like going
between MS-DOS, UNIX, Windows, Macintosh, etc - little things can make a big
difference (this is a big issue for me as I investigated medical device
incidents in the hospital - not to mention materials management wanting to
save money by standardizing on one manufacturer).

I know that in my current position, I'm trying to reduce the number of
interfaces that clinicians are dealing with - we routinely tell medical
manufacturers that we want fewer/smaller devices that have a single
interface - as well as integration between devices for better information
display - i.e. in one place.  I know that the future of medical devices will
eventually be some sort of transduction mechanism hooked to a PC or some
computer which does data gathering, processing and dissemination.  Until
then, we will have to deal with the onesies and twosies and incongruent mix
of technologies and concentrate on training, training and training along
with striving for standardization and reduction of different interfaces for
clinicians.

Lastly, what I find interesting is that we used to tell the clinicians that
the technology was not there to replace them - i.e., *look* at your patient
- the technology was merely there to assist them by giving them more
information.  Sometimes if the clinicians are pressed for time, they lose
that perspective - and yes, sometimes they are intimidated by the
technology.  I'd venture to say that producing medical devices is probably
one of the more demanding pursuits because of the regulatory jungle, the
nature of the business (saving lives) and the broad range of technical
skills of clinicians.

Bridget Moorman, Clinical Engineer, Kaiser Permanente

------------------------------

Date: Mon, 04 Dec 1995 08:57:35 +0100
From: [email protected] (Erik Hollnagel)
Subject: Re: Alarm and alarm-silencing risks in medical equipment

It is interesting to read the recent stories about the risks in silencing
alarms. The problem is well known in the process industry, but on a much
larger scale. Instead of having a single or a few alarms, there are hundreds
or thousands. A nuclear power plant can easily have several thousand alarms,
and one of the Norwegian off-shore installations is known to have about
15,000 alarms that go into the control room.

The reason for this proliferation is the use of information technology. In
the "old" days, alarms had to be announced via an annunciator tile, and
there was a limit to how many annunciator tiles one could put in a panel or
on a wall. Not so with computers. Alarms can be produced simply by writing
them to a screen, and the capacity of the screen is unlimited. That is,
assuming that the operators are able to scroll through all the alarms and
find the important ones.

This has led to the art of alarm filtering, which often uses complicated
logic to suppress alarms that are not relevant at the moment. Note the use of
the term RELEVANT. This is where the problem lies, because it requires that
all conceivable situations - or at least all the important ones - can be
analysed in advance. Quite apart from the problem of software reliability
(of the filtering algorithms) this touches upon the problems of risk
analysis and interface design.
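
A toy Python sketch of such a relevance filter (the plant modes and alarm
names are invented purely for illustration) makes the fragility concrete:

    # Which alarms are suppressed depends on the current operating mode.
    SUPPRESSED = {
        "startup":  set(["low_turbine_speed", "low_generator_load"]),
        "shutdown": set(["low_coolant_flow"]),
        "normal":   set(),
    }

    def filter_alarms(alarms, mode):
        # Pass through only the alarms deemed relevant in this mode.
        # The weakness noted above is visible here: every mode/alarm
        # combination must have been anticipated by the analyst.
        return [a for a in alarms if a not in SUPPRESSED.get(mode, set())]

    print(filter_alarms(["low_turbine_speed", "high_core_temp"], "startup"))
    # ['high_core_temp']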

To return to alarm silencing, this is clearly a problem in control rooms as
well as in hospitals. It is a problem for the doctors as well as the
patients. When alarms occur frequently, as during an accident, various types
of visual and auditory signals are used to attract the attention of
operators. But in the medium and long term these signals are seen as
distracting (which they are) and useless (which they should not be) and are
hence ignored. This is a simple fact of human nature that designers should
acknowledge. The risks in designing alarm systems without considering the
users' ability to respond appropriately are clearly substantial. It is a
problem that is taken very seriously in industry. But specific, spectacular
examples of ignoring alarms are welcome.

One that comes to mind is the pilot's comment in the Lauda Air crash in
1991. According to the transcript from the CVR, the pilot said: "But, ah,
you know it's a - it doesn't really, it's just an advisory thing, I don't ah
-". This was in response to the cockpit warning light. Six minutes later the
plane had crashed.
                                                   Fax: +47.6918.7109
Erik Hollnagel, Ph.D., Principal Advisor, OECD Halden Reactor Project
P. O. Box 173, N-1751 Halden, Norway  +47.6918.3100; [email protected]

------------------------------

Date: Sat, 2 Dec 1995 15:32:01 -0500
From: [email protected] (Jay Harrell)
Subject: Re: Alarm and alarm-silencing risks in medical equipment

Cliff Sojourner's article (RISKS-17.50, in response to John Strohm's
article) struck a nerve with me too.  All the medical monitors are great
technology but apparently the tolerances are set for far too many false
alarms to be safe in the real world.  I suppose that each manufacturer wants
to be sure it isn't their equipment that fails to give a warning early
enough - that way a failure can be blamed on the person watching the
monitor, but the end result is that real problems go unnoticed for long
periods.

Last spring my two-month-old nephew was on a morphine IV pump after surgery,
along with all the various monitors and alarms Cliff mentioned.  The heart
rate monitor repeatedly alarmed and the nurses would come in and silence it,
telling us it was normal.  Unfortunately, the monitor didn't have another
alarm that said there was a real problem.  It turns out my nephew was
getting a morphine overdose and his heart rate really was getting
dangerously low.  This was only discovered after persistent nagging on the
part of the mother and grandmother who were there the whole night and
watched the monitor themselves.  Finally they got someone to pay enough
attention; he was transferred to the ICU and an antidote was administered.  It
was touch and go the next twelve hours or so, but he's fine now.  But I
wonder what we would have been told the cause of death was had there been no
family member there and awake.

As Cliff wrote:
> The risk here is that a real life-threatening situation would
> not be discovered in time to help the patient.  ...

In this case the situation was discovered in time, but without much margin.

Jay Harrell, Atlanta, Georgia

------------------------------

Date: Sun, 3 Dec 1995 16:08:49 -0500 (EST)
From: Kenneth Albanowski <[email protected]>
Subject: Re: Alarm and alarm-silencing risks in medical equipment

On 30 Nov 1995 [email protected] wrote about visual and audible
alarms used in a Newborn Intensive Care Unit, about how false alarms are
common, and about how they are regularly silenced via a cut-off switch.

The initial problem of ignored alarms is of course quite troublesome, but I
wonder about a simpler but more insidious risk: are alarming noises
_harming_ the patients, simply by their existence? Certainly, they aren't
going to be bone-shaking klaxons, but if someone desperately needs rest, or
if their sensory nerves aren't yet fully formed, can we truly say that this
is a good environment for them to be in?

I feel safe in saying the current risk is minimal, and perhaps nonexistent,
but that isn't a reason to ignore the issue. The science of user-interfaces
is still in development, and we mustn't forget that a user is anybody in the
room, be they human or animal, asleep or awake, participants or not.
Learning when to avoid using a particular type of user-interface element
(like an audible alarm) is just as important as learning which elements
should be used in the first place.

Here, some obvious solutions present themselves: visual alarms (like
rotating police-car-style lights) that are conspicuous, but shaded so that
they can be seen only by people standing up. Such a light would be obvious
the instant you entered the room, yet would not disturb the patients. (A
minor risk here is that of creating problems for short staff members or
those who use wheelchairs.)

In a different (and unfortunately much more RISKy) vein, one could adopt an
entirely different concept of how alarms should be handled, and give staff
electronic devices (pagers, etc.) that would be activated by a central
computer on any alarm condition in their vicinity or area of interest.

Both lose the value (immediacy, direction, and information content) of an
audible alarm, but avoid the problem of alarming the folks who don't need to
be alarmed in the first place.
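
A minimal Python sketch of the central-dispatch idea (the areas, pager ids,
and paging transport are all assumptions, not a real hospital system):

    subscriptions = {}   # area -> set of pager ids

    def register(pager_id, area):
        subscriptions.setdefault(area, set()).add(pager_id)

    def send_page(pager_id, text):
        # Stand-in for a real paging transport (an assumption here).
        print("[page -> %s] %s" % (pager_id, text))

    def raise_alarm(area, message):
        # Route the alarm to staff registered for this area, instead of
        # sounding an audible alarm in the patient's room.
        for pager_id in subscriptions.get(area, set()):
            send_page(pager_id, area + ": " + message)

    register("nurse-7", "NICU-bed-3")
    raise_alarm("NICU-bed-3", "heart rate below threshold")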

Kenneth Albanowski ([email protected], CIS: 70705,126)

------------------------------

Date: Mon, 04 Dec 95 13:03:42 EST
From: Steve Branam - Hub Products Engineering <[email protected]>
Subject: Re: Alarm and alarm-silencing risks in medical equipment

Unreliable warning systems have been problematic for centuries (hence the tale
of the boy who cried wolf). We quickly learn to ignore them so we can continue
with real work. However, the danger is that we get used to expecting them to be
unreliable.

While in high school I once borrowed a neighbor's family car. They told me,
"Oh, by the way, you can ignore that oil light, it's been on for ages." You
can guess what comes next. In the middle of the night as I am returning
home, the thing throws a rod, leaving a trail of engine parts on the road.
An indicator of imminent failure alerted them to a well-known maintenance
issue, yet they completely ignored it.

The risk is the age old one of complacency. People get used to complex
technological devices being flaky. When yet another device seems to be
acting up, they don't take it seriously, even when someone's life depends on
its proper operation. As an engineer, I find this behavior totally
incomprehensible, but it is something I have to expect from users.
Unfortunately, I can only go so far in preventing a user from doing
something stupid. Worse, I can never fully anticipate the ways in which the
system can be abused and misused. This seems to be more a problem of
psychology than technology. There is no bug for me to fix if a user chooses
to ignore an alarm condition and insists on continuing to run.

This problem is compounded by uncoordinated and unprioritized alarms. The
multiple systems monitoring a patient are all clamoring for attention as the
one thing that needs attending to. In reality only some of them may really
be pertinent, while others can be safely ignored for the moment. Imagine
working in an environment where a half dozen little crises seem to be
erupting every moment, despite the fact that everything is under control. It
makes it hard to tell when things go out of control.
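
One conventional remedy (my sketch, not something the devices above do) is
to triage alarms by priority so that only the most urgent condition claims
attention at a given moment.  A minimal Python sketch, with priorities
invented for illustration:

    import heapq

    alarm_queue = []   # (priority, message); lower number = more urgent

    def raise_alarm(priority, message):
        heapq.heappush(alarm_queue, (priority, message))

    def next_alarm():
        # Staff see the single most urgent condition first, rather than
        # half a dozen devices clamouring with equal insistence.
        return heapq.heappop(alarm_queue) if alarm_queue else None

    raise_alarm(3, "infusion pump: battery at 30 minutes")
    raise_alarm(1, "heart rate below threshold")
    print(next_alarm())   # (1, 'heart rate below threshold')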

Steve

------------------------------

Date: Sat, 2 Dec 1995 19:26:43 -0500
From: [email protected] (Robert J Horn)
Subject: Re: Alarm and alarm-silencing risks in medical equipment

The issue of alarms and alarm suppression in a hospital environment is
a rather complex one.  From time to time IEEE EMB (the bio-med
engineering magazine) discusses the problems involved.  There is a
fundamental conflict in many hospital settings:
 1)  There are a great many machines and sensors providing crucial
information, life-critical services, etc. in place.  Each of them is
designed with visible and audible warning and danger indicators of
various sorts.
 2)  The medical staff sometimes go into sensory overload with all the
noises.  This has led to dangerous situations.

This is a valuable area to study and pursue mechanisms for improvement.
Don't expect any simplistic answer to work.  For every situation where it is
obvious that a warning indicator would have saved a life, there is another
situation where that same indicator would cost a life.  Often you must place
this life-critical decision in the hands of the medical staff at the time.

R Horn   [email protected]

------------------------------

Date: Mon, 4 Dec 1995 16:56:36 -0800 (PST)
From: [email protected] (Rochelle Grober)
Subject: Re: Alarm and alarm-silencing risks in medical equipment

It appears that the medical-equipment industry is traversing the same
ground, with regard to audible alarms, that the military and space
industries crossed decades ago.

In both the space and military applications, it has long been known that too
high a false alarm rate will cause human operators to either ignore alarms
from that system, or shut them off.  In neither industry was this considered
a "good" thing, so most RFP's would specify the maximum false alarm rate
allowable for a system.  In some systems, this meant that some valid alarm
situations might go unannounced for a time, but the system designs usually
required that a real alarm situation would be identified within a specified
time.  These two requirements (and, of course, there were others) led to both
hardware and software designs that attempt to filter out transient noise and
that also track suspect records over time to identify alarms in both noisy
and lossy environments.

In most instances, both the hardware and the software designs are modified
to provide as much capability as possible for handling these problems.

In the case of some satellite communications, data frames could be marked
at the transmitting end as to whether all the data in the various fields
(i.e., from the various instruments) was the latest and most likely
reliable, or old (no new info since the last frame) and suspect.  On the
receiving end, the entire frame could be marked as to whether the broadcast
was received completely, partially, or not at all.  This still presented
too much alarm data, and so the data was passed to diagnostic software that
tracked states from frame to frame, accumulating historical information and
watching for thresholds and other trend information in order to generate
alarms at consoles for human monitoring.  Of course, if done improperly,
the system can miss alarms, but
the groups writing this software are supposed to thoroughly test it, and run
both filtered and non-filtered versions in simulation and early operations
to verify the software (along with all the other software verification
processes).
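
A minimal Python sketch of the frame-to-frame tracking idea, raising an
alarm only after several consecutive suspect frames (the threshold of
three is an invented example, not from any flight system):

    from collections import deque

    class FrameAlarmFilter:
        # Alarm only after `threshold` consecutive suspect frames, so a
        # single noisy or dropped frame causes no false alarm.
        def __init__(self, threshold=3):
            self.history = deque(maxlen=threshold)

        def observe(self, frame_ok):
            self.history.append(frame_ok)
            return (len(self.history) == self.history.maxlen
                    and not any(self.history))

    f = FrameAlarmFilter()
    for ok in (True, False, False, False):
        print(f.observe(ok))   # True only once 3 bad frames accumulate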

If the medical industry looked around at other safety-critical industries
that have trod this path before, some of these risks from too-frequent
alarms could be quickly reduced or eliminated.

But I guess this is a case of the risk that those who are ignorant of
history are doomed to repeat it. :-/

--Rocky

------------------------------

Date: 6 September 1995 (LAST-MODIFIED)
From: [email protected]
Subject: ABRIDGED info on RISKS (comp.risks)

The RISKS Forum is a moderated digest.  Its USENET equivalent is comp.risks.
SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent) on
your system, if possible and convenient for you.  BITNET folks may use a
LISTSERV (e.g., LISTSERV@UGA): SUBSCRIBE RISKS or UNSUBSCRIBE RISKS.  [...]
DIRECT REQUESTS to <[email protected]> (majordomo) with one-line,
  SUBSCRIBE (or UNSUBSCRIBE) [with net address if different from FROM:]
  INFO     [for further information]

CONTRIBUTIONS: to [email protected], with appropriate,  substantive Subject:
line, otherwise they may be ignored.  Must be relevant, sound, in good taste,
objective, cogent, coherent, concise, and nonrepetitious.  Diversity is
welcome, but not personal attacks.  [...]
ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY.
Relevant contributions may appear in the RISKS section of regular issues
of ACM SIGSOFT's SOFTWARE ENGINEERING NOTES, unless you state otherwise.

RISKS can also be read on the web at URL http://catless.ncl.ac.uk/Risks

RISKS ARCHIVES: "ftp ftp.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>
cd risks<CR> or cwd risks<CR>, depending on your particular FTP.  [...]
[Back issues are in the subdirectory corresponding to the volume number.]
  Individual issues can be accessed using a URL of the form
    http://catless.ncl.ac.uk/Risks/VL.IS.html      [i.e., VoLume, ISsue]
    ftp://unix.sri.com/risks  [if your browser accepts URLs.]

------------------------------

End of RISKS-FORUM Digest 17.51
************************