Subject: RISKS DIGEST 17.23
REPLY-TO: [email protected]

RISKS-LIST: Risks-Forum Digest  Thursday 3 Aug 1995  Volume 17 : Issue 23

  FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

***** See last item for further information, disclaimers, etc.       *****

 Contents:
Minneapolis homeless burn out US West Internet connection (Joyce K Scrivner)
A Monkey Wrench in Ford's Floppy Promotion (Edupage, Padgett Peterson)
Total surveillance on the highway (Phil Agre)
Computerized prognoses for critically ill hospital patients (Lauren Wiener)
Watch-ing the detectives (Mark Eckenwiler)
Intel-Hacking Conviction (Mich Kabay)
Re: Tenth Anniversary Issue (Mark Seecof, Dave Parnas)
Re: Limits to Software Reliability (Pat Place, D. King)
Call for Papers - 1996 IEEE Security and Privacy Symposium (John McHugh)
SEI Symposium: 1995 Software Engineering Symposium (Purvis Jackson)
Info on RISKS (comp.risks), contributions, subscriptions, FTP, etc.

----------------------------------------------------------------------

Date: Wed, 02 Aug 95 16:15:00 CDT
From: "Scrivner, Joyce K" <[email protected]>
Subject: Minneapolis homeless burn out US West Internet connection

Sometime during the chilly evening of 28 July 1995, a group of homeless
people sheltering under one of the Minneapolis bridges started a fire.  The
fire melted three fiber-optic cables connecting some local systems to the
Internet.  The media (radio, television, and print) reported that all local
access to the Internet was gone, but that phone service (including long
distance) was still available.  I didn't see any correction reporting that
some Internet access runs through phone lines.  Nor did I hear what happened
to the `homeless'.

  [Michael Ayers <[email protected]> noted an article in the *Minneapolis
  StarTribune* on 30 July 1995, which added that a copper cable also
  went out, and that most voice calls were rerouted automatically, but data
  transmissions were not.  PGN]

------------------------------

Date: Tue, 1 Aug 1995 21:22:31 -0400
From: [email protected] (Edupage)
Subject: A Monkey Wrench in Ford's Floppy Promotion (Edupage, 1 Aug 1995)

Ford Motor Co. decided its latest PR blitz would include a high-tech twist
-- a press kit on a floppy disk.  The only problem is, the disk contained a
"monkey virus," which, among other things, can make it appear as if all the
data's been erased from the hard drive.  "Just don't use it," says a Ford
spokesman, who couldn't explain how the disks could have become
contaminated.  Ford followed up by sending all recipients apologetic letters
via, you guessed it, snail mail.  (Tampa Tribune 31 July 1995, B&F2)

------------------------------

Date: Thu, 3 Aug 95 10:32:47 -0400
From: [email protected] (A. Padgett Peterson)
Subject: A Monkey Wrench in Ford's Floppy Promotion (RISKS-17.23)

This would be funny if it were not so sad, and it is just the latest in a
long line of incidents in which manufacturers have sent out infected disks.
PB gave us the MusicBug early in this decade (I still have some sealed
floppies if there is any doubt).  Intel gave us the Michelangelo.

The depressing fact is that a boot-sector virus on a floppy disk is the
*easiest* thing to detect, since it always changes one specific sector that
should always be the same.  Five years ago I wrote a generic freeware floppy
boot-sector virus detector/restorer (FixFbr).  It still works.
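
For illustration, here is a minimal sketch (in Python, and emphatically not
FixFbr itself) of the idea: compare the 512-byte boot sector of a floppy
image against a saved known-good copy, and rewrite it if it differs.  The
file names are hypothetical, and it operates on a raw disk image rather
than a physical drive.

  # Sketch only: check a floppy image's boot sector against a known-good
  # master copy and restore it if it has been altered.
  SECTOR_SIZE = 512  # a floppy boot sector is the first 512 bytes

  def check_and_restore(image_path: str, master_path: str) -> bool:
      """Return True if the boot sector matched; restore it if not."""
      with open(master_path, "rb") as f:
          master = f.read(SECTOR_SIZE)
      with open(image_path, "r+b") as f:
          current = f.read(SECTOR_SIZE)
          if current == master:
              return True
          f.seek(0)        # the boot sector lives at offset 0
          f.write(master)  # overwrite the suspect sector with the original
      return False

  if __name__ == "__main__":
      ok = check_and_restore("floppy.img", "boot_master.bin")
      print("boot sector clean" if ok else "boot sector restored from master")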

IMNSHO, though, anyone who sends out a floppy disk to unsuspecting people
without knowing exactly what is on the disk has not exhibited "due care".
It would be easy to do random sampling and compare every byte against a
certified master -- about five minutes per disk, and only one disk from each
batch would need to be checked.  I said this in print in 1992 and have been
saying it ever since.  Where is Ralph Nader when you need him?
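
Here is one hedged sketch of what such a sampling check might look like,
assuming the certified master and a sampled disk are both available as raw
image files (the names are hypothetical); a SHA-256 hash compare stands in
for the byte-by-byte comparison.

  # Sketch only: verify a randomly sampled disk against a certified master
  # by hashing both images and comparing the digests.
  import hashlib

  def matches_master(sample_path: str, master_path: str,
                     chunk_size: int = 65536) -> bool:
      """Return True if the two disk images hash identically."""
      digests = []
      for path in (sample_path, master_path):
          h = hashlib.sha256()
          with open(path, "rb") as f:
              while chunk := f.read(chunk_size):
                  h.update(chunk)
          digests.append(h.digest())
      return digests[0] == digests[1]

  if __name__ == "__main__":
      if matches_master("sampled_disk.img", "certified_master.img"):
          print("sampled disk matches the certified master")
      else:
          print("MISMATCH: do not ship this batch")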

The RISK is not that there was a virus on the disk.  The real RISK is that
a manufacturer does not know *exactly* what IS on the disk; the virus is
just a symptom of a much deeper problem.

CAVEAT: I have not seen the disk (yet -- I would like one for my collection),
       but this note seemed to come from a reputable source.  I tried calling
       Ford, but no one seemed to know what I was talking about.  The fourth
       person did agree to try to find out and said they would call back.
       RSN.  My opinion of any such incident still stands.

Padgett

------------------------------

Date: Tue, 1 Aug 1995 17:51:20 -0700
From: Phil Agre <[email protected]>
Subject: Total surveillance on the highway

A controversy is growing around the failure of "Intelligent Transportation
System" programs in the United States to exercise any leadership in the
adoption of technologies for privacy protection.  As deployment of these
systems accelerates, some of the transportation authorities have begun to
recognize the advantages of anonymous toll collection technologies.  For
example, if you don't have any individually identifiable records then you
won't have to respond to a flood of subpoenas for them.  Many, however, have
not seen the point of protecting privacy, and some have expressed an active
hostility to privacy concerns, claiming that only a few fanatics care so
much about privacy that they will decline to participate in surveillance-
oriented systems.  That may in fact be true, for the same reason that only a
few fanatics refuse to use credit cards.  But that does not change the
advantages to nearly everyone of using anonymous technologies wherever they
exist.

Let me report two developments, one bright and one dark.  On the bright
side, at least one company is marketing anonymous systems for automatic toll
collection in the United States: AT/Comm Incorporated, America's Cup
Building, Little Harbor, Marblehead MA 01945; phone (617) 631-1721, fax
-9721.  Their pitch is that decentralized systems reduce both privacy
invasions and the hassles associated with keeping sensitive records on
individual travel patterns.  Another company has conducted highway-speed
trials of an automatic toll-collection mechanism based on David Chaum's
digital cash technology: Amtech Systems Corporation, 17304 Preston Road,
Building E-100, Dallas TX 75252; phone: (214) 733-6600, fax -6699.  Because
of the total lack of leadership on this issue at the national level, though,
individuals need to do what they can to encourage local transportation
authorities to use technologies of anonymity.  It's not that hard: call up
your local state Department of Transportation or regional transportation
authority, ask to talk to the expert on automatic toll collection, find out
what their plans are in that area, and ask whether they are planning to use
anonymous technologies.  Then call up the local newspaper, ask to talk to
the reporter who covers technology and privacy issues, and tell them what
you've learned.
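
For readers wondering how a toll can be both verifiable and anonymous, here
is a toy sketch of the blind-signature primitive behind Chaum-style digital
cash.  It illustrates the general idea only, not Amtech's actual protocol;
the RSA parameters are hypothetical and far too small to be secure.

  # Toy sketch of Chaum-style blind signatures: the bank signs a toll token
  # without seeing it, so the toll booth can verify the bank's signature
  # without linking the token to the driver.  INSECURE toy parameters.
  import secrets
  from math import gcd

  p, q = 999983, 1000003              # small primes, illustration only
  n, e = p * q, 65537
  d = pow(e, -1, (p - 1) * (q - 1))   # the bank's private exponent

  def blind(token: int) -> tuple:
      """Driver blinds the token with a random factor r before signing."""
      while True:
          r = secrets.randbelow(n - 2) + 2
          if gcd(r, n) == 1:
              return (token * pow(r, e, n)) % n, r

  def bank_sign(blinded: int) -> int:
      """The bank signs blindly; it never sees the underlying token."""
      return pow(blinded, d, n)

  def unblind(blind_sig: int, r: int) -> int:
      return (blind_sig * pow(r, -1, n)) % n

  token = 123456789                   # the driver's randomly chosen token
  blinded, r = blind(token)
  sig = unblind(bank_sign(blinded), r)
  assert pow(sig, e, n) == token      # booth verifies; learns no identity
  print("toll token verified anonymously")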

On the dark side, here is a quotation from a report prepared for the State of
Washington's Department of Transportation by a nationally prominent consulting
firm called JHK & Associates (page 6-9):

 Cellular Phone Probes.  Cellular phones can be part of the backbone of a
 region-wide surveillance system.  By distributing sensors (receivers) at
 multiple sites (such as cellular telephone mast sites), IVHS technology
 can employ direction finding to locate phones and to identify vehicles
 where appropriate.  Given the growing penetration of cellular phones (i.e.,
 estimated 22% of all cars by 2000), further refinements will permit much
 wider area surveillance of vehicle speeds and origin-destination movements.

This is part of a larger discussion of technologies of surveillance that can
be used to monitor traffic patterns and individual drivers for a wide
variety of purposes, with and without individuals' consent and knowledge.
The report speaks frankly of surveillance as one of three functionalities of
the IVHS infrastructure.  (The others are communications and data
processing.)  The means of surveillance are grouped into "static
(roadway-based)", "mobile (vehicle-based)", and "visual (use of live video
cameras)".  The static devices include "in-pavement detectors", "overhead
detectors", "video image processing systems", and "vehicle occupancy
detectors".  The mobile devices include various types of "automatic vehicle
identification", "automatic vehicle location", "smart cards", and the
just-mentioned "cellular phone probes".  The visual devices are based on
closed-circuit television (CCTV) cameras that can serve a wide range of
purposes.

The underlying problem here, it seems to me, is an orientation toward
centralized control: gather the data, pull it into regional management
centers, and start manipulating traffic flows by every available means.
Another approach, much more consonant with the times, would be to do things
in a decentralized fashion: protecting privacy through total anonymity and
making aggregate data available over the Internet and wireless networks so
that people can make their own decisions.  Total surveillance and
centralized control have been the implicit philosophy of computer system
design for a long time.  But the technology exists now to change that, and I
can scarcely imagine a more important test case than the public roads.
People need to use roads to participate in the full range of associations
(educational, political, social, religious, labor, charitable, etc etc) that
make up a free society.  If we turn the roads into a zone of total
surveillance then we chill that fundamental right and undermine the very
foundation of freedom.

Phil Agre, UCSD

------------------------------

Date: Wed, 02 Aug 95 09:33:44 -0700
From: Lauren Wiener <[email protected]>
Subject: Computerized prognoses for critically ill hospital patients

The 31 July 1995 issue of _Forbes_ includes an article (pp. 136-7) on the
products of Apache Medical Systems, which predict patient outcomes based on
a database of "400,000 hospital admittances covering 100-odd diseases.  From
these statistics Apache's software can predict patient survival with an
accuracy that can *sometimes* beat that of doctors' hunches."  [fake italics
mine]

The software is intended to guide the doctor's choice of treatment.  Several
examples are given, including a rather chilling one in which the supposed
objectivity of the computer is enlisted to coax a husband into giving
permission to take his wife off a respirator and let her die.  The doctor
who founded the
company (Dr. Knaus) is quoted as saying he created the system because "I
wasn't smart enough to figure out what to do in each situation."  Another
highlight: "Many hospitals adopted the Apache system to cut costs and
measure quality in intensive care units."

The article closes with a brief discussion of the ethical issues, in which
Dr. Knaus says: "If I were [the patient], I would want to be judged on Apache.
It knows only those facts that are relevant to my condition, not race or
insurance coverage, which have been used to allocate care in the past."

In other words, the computerized system is good because it is an improvement
over a deeply flawed, inequitable, and racist system?

------------------------------

Date: Wed, 2 Aug 1995 18:26:06 -0400
From: Mark Eckenwiler <[email protected]>
Subject: Watch-ing the detectives

*The New York Times* 2 Aug 1995 contains a story about the apprehension of a
homeless man as a murder suspect in NYC.  Apparently, a digital wristwatch
not belonging to the victim was discovered near her body on the floor of her
apartment.  NYC police found a number stored in the watch's memory, which
they identified as an account number at Banco Popular de Puerto Rico, a bank
with several branches in NYC.

Having previously been evicted from his home (his mother's residence), the
suspect was homeless and therefore not easily located.  Police solved this
problem by asking the bank to put a hold on the account, which showed a
history of monthly veteran's benefit check deposits.  The suspect showed up
at various branches to withdraw money, was turned away on each occasion, and
eventually appeared at one of the remaining branches (which was staked out
by the police).

He is now in custody, and was scheduled to be arraigned today on
charges of murder and attempted rape.

Mark Eckenwiler    [email protected]

------------------------------

Date: 31 Jul 95 23:09:52 EDT
From: "Mich Kabay [NCSA Sys_Op]" <[email protected]>
Subject: Intel-Hacking Conviction

From the Reuters news wire via CompuServe's Executive News Service:

Computer expert convicted of hacking at Intel

PORTLAND, Ore., July 26 (Reuter, 26 Jul 1995) - A computer-programming
expert has been convicted of hacking his way through an Intel Corp.
computer network in what he claimed was an effort to point out security
flaws.  Randal Schwartz, 33, was convicted Tuesday on three counts of
computer crimes after a 2 1/2 week jury trial in Washington County Superior
Court.  [He was convicted of stealing passwords and making unauthorized
changes in an Intel network.]

 Comments from MK: Another story confirming the old principle that you do
 NOT attempt to improve security by busting it without getting _written_
 authorization from an appropriate officer of the organization.  This is
 known as the CYA principle.

M.E.Kabay,Ph.D. / Dir. Education, Natl Computer Security Assn (Carlisle, PA)

------------------------------

Date: Tue, 1 Aug 1995 13:53:01 -0700
From: Mark Seecof <[email protected]>
Subject: Re: Tenth Anniversary Issue

Okay, now that I've seen RISKS-17.22 I can't contain my desire to offer up
my opinion.  I've followed RISKS closely for 6 years; I paid less complete
attention before that.  I've learned a heck of a lot from RISKS; it has a
remarkable signal-to-noise ratio for which I thank PGN first (3 cheers for
our leader; hip, hip...) and the many wise and thoughtful contributors
immediately after.  But you see, it's not enough.  I mean, RISKS, like any
other voluntary, self-selected-participation communication forum, isn't
enough to counteract the problems in the industry.

Very many of the items and issues discussed in RISKS reveal that the
perpetrators of the problems either missed or ignored the lessons of those
who went before.  Remember the SGI development memo in RISKS-15.80?  Just
about every problem identified in that memo had been discussed in Fred P.
Brooks' _The Mythical Man-Month_ years before.  And the SGI folks had NO CLUE.
I could adduce hundreds of similar examples of people with "new" problems
documented in RISKS that were really variations on OLD problems.  Okay, so
some problems come up over and over.  But the shared experience of
practitioners (shared through books, articles, RISKS, etc.) ought to help
people (a) avoid, (b) mitigate or minimize, and (c) fix such problems, but
it seems like many people walk right past the bridges and jump into the
chasm.  Now, I'll admit that when someone *avoids* a problem or class of
problems successfully, that doesn't often get written up in RISKS.  Perhaps
RISKS suffers from the same "bad news sells" syndrome that politicians like
to denounce in newspapers.  But somehow (based on my personal experience in
the industry plus what I pick up...) I don't think reality is that simple or
that happy.

We need RISKS.  We need the books by Brooks, and Don Norman, and all the
others.  We need the articles, the conferences, and even the water-cooler
bull-sessions.

But what we really need is to figure out how to reach the practitioners who
aren't curious about these issues.  Perhaps we can try to influence the
early training of programmers.  Maybe every "Dumbkopf's Guide to C++" book
ought to have some RISKS horror stories in it.  Perhaps we could arrange
out-of-band rewards (ACM prizes?) for developers who do-the-right-thing and
encrypt their password files, use checksums to validate key entry of
patient-record-ID's, apply systematic design principles to their projects,
provide useful documentation, etc.
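
On the checksum point, one concrete (and merely illustrative) possibility is
a mod-10 check digit in the style of the Luhn algorithm, which catches every
single-digit typo and most adjacent transpositions; nothing above prescribes
this particular scheme, and the ID below is hypothetical.

  # Sketch only: a Luhn mod-10 check digit for catching key-entry errors
  # in record IDs.
  def luhn_check_digit(digits: str) -> int:
      """Compute the check digit to append to a numeric ID."""
      total = 0
      for i, ch in enumerate(reversed(digits)):
          v = int(ch)
          if i % 2 == 0:      # double every second digit, rightmost first
              v *= 2
              if v > 9:
                  v -= 9
          total += v
      return (10 - total % 10) % 10

  def valid_id(full_id: str) -> bool:
      """Validate an ID whose final digit is its check digit."""
      return luhn_check_digit(full_id[:-1]) == int(full_id[-1])

  record_id = "7992739871"                             # hypothetical ID
  full = record_id + str(luhn_check_digit(record_id))  # "79927398713"
  assert valid_id(full)
  assert not valid_id("79927398763")   # a single mistyped digit is caught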

One factor that seems to promote a great many problems is schedule pressure.
Under schedule pressure developers stop building tools before production
parts, abandon formalized methods, choose not to implement RAS-critical or
testability items on the theory that they'll take too long, postpone
preparing documentation (often forever), and so forth.  We fancy-pants RISKS
readers know that these are often false economies.  Whatever system may
emerge from a panicked development will likely have many problems.  Worse,
we know that panic frequently will not shorten the delivery schedule, since
every foolish economy will be offset (sometimes exceeded) by cost (time and
money) to correct its effects.  We have to come up with good methods to
avoid problems that everyone believes will shorten schedules rather than
stretching them.  Most software developers use high-level (level 3) languages
now.  Hardware developers use CAD/CAM systems.  Tools like these help
shorten schedules at the same time that they help developers avoid certain
types of problems (for example, a well-realized CAD system provides
simulation to make sure modules fit properly at interfaces).  In this vein,
we need to come up with tools and methods that minimize RISKS without
appearing to bulge schedules.  People will always blow off that NASA
waterfall chart when their software project is behind the calendar.  We need
to replace it with stuff they'll reach out for like a drowning man for a
life preserver.  (I'm only moderately hopeful here, because entropy suggests
that you can always deliver crap faster than quality... but we've gotta
try.)

The other RISK, not of poorly realized systems but of systems that do
hateful things (like, say, electronic payment systems that provide retailers
with detailed personal information about customers without their
(customers') knowledge or consent) we must approach in another way.  We've
got to distinguish between poor design (say, failure to use crypto to secure
cell-phone ID's) for which people are tempted to compensate with stupid
policy (e.g., private policy: restrict roaming; public policy: criminalize
possession of radio receivers capable of tuning cell-phone freqs) and
deliberately warped design (say, the Clipper chip) *caused* by bad policy
(private: kissing DoD/FBI heinie; public: attempting to spy on everyone).
Every scheme (of education, tool-building, whatever) we deploy against
ineffective development applies to the poor-design problem.  But I think we
need a different approach to the evil design problem...

Social pressure affects people a lot.  To avert evil system designs, our
best tactic is to promote an ethos that despises people who participate in
the design or construction of evil systems.  We need to cultivate cultural
pressure against such people.  We want people to internalize that ethos
(that is, we want them to find such work loathsome), and we want them to
externalize it (that is, we want them to actively scorn and revile people
who do such work).  The second, external, form of pressure will ensure the
adoption of the first, and help deter evil behaviour even from people who
have not internalized our ethos but who wish to avoid offending those who
have.  I suggest that we've been far too nice recently.  We practitioners
haven't properly ostracized the (for example) Dorothy Dennings of the world.

We need RISKS to keep the people who give a damn in touch.  We need RISKS so
people like myself (your humble contributor) will have something to learn
from.  And--going into the future--we need to work toward disseminating
methods and attitudes that will help avert and minimize risks.  I don't read
a single issue of RISKS without considering what (if any) lessons from it I
can use or share with colleagues.  I frequently excerpt (with full
attribution and a back pointer to the complete record) RISKS Digest for my
friends and coworkers.  And I strive constantly to promote better design
GOALS, better methods, and better implementation techniques for the projects
I work on.  (I don't claim consistent success, but I make my effort...).

The anniversary lesson is: since we keep seeing the same stuff come up with
only minor variations in detail, we've got to work these problems at some
more basic levels.  Reading and/or contributing to RISKS is voluntary.
We've got to *push* RISK awareness on other people.  Subtly, I hope; by
example, I hope.  But somehow.

------------------------------

Date: Wed, 2 Aug 95 09:05:30 EDT
From: [email protected] (Dave Parnas)
Subject: Re: Tenth Anniversary Issue

In RISKS' self-congratulatory TENTH ANNIVERSARY ISSUE Prof.  Peter Denning
wrote, "RISKS is vibrant, alive, and well after 10 years."  I am not so
sure.

The first step towards solving a problem is raising awareness.  In the case of
RISKS, it was important to make people aware of the risks of using computers.
As people became aware of the immense power of our technology, we wanted them
to understand that although computers were becoming larger, faster, and more
pervasive, computer systems were far from infallible and one cannot
assume that they will function correctly.

Awareness is not an end in itself.  After awareness should come understanding.
We should be trying to understand why computers are so untrustworthy
that some of our leading experts avoid computer controlled aircraft, oppose
reliance on computers in nuclear plants, etc.

Even understanding should not be our final goal.  After understanding should
come a discussion of solutions.  We should be looking for ways to make
computers more trustworthy, to reduce the risks of using this highly
beneficial technology.

I think that the first goal has been achieved.  In addition to RISKS, which is
widely read by professionals, reporters, and users, we have seen the
publication of several very good consciousness raising books.  More and more,
we find that RISKS is simply distributing stories that have appeared in the
press somewhere.  Instead of getting direct reports from professionals
describing their own professional experiences, we are getting reports of what
they read in their local papers.  Some regular contributors seem to do nothing
but run a clipping service.  Frankly, after a while, many of these stories
sound the same.  They are superficial, provide few technical details, and
offer no new insights.  RISKS is often quite boring.  There are a few
detailed reports from someone with professional understanding of the
problem, but they are often buried among news clips.  Some writers add their
own professional analysis and occasionally we hear a new idea, but those are
the exceptions.

It is time for RISKS to turn its attention to the next two phases.  We need to
try to understand why these errors keep happening and what we can do about
them.  We should leave the cute stories to the public press and move on.
Only then will I agree that RISKS is vibrant, alive, and well.

Prof. David Lorge Parnas, Communications Research Laboratory, Department of
Electrical and Computer Engineering, McMaster University, Hamilton, Ontario
Canada L8S 4K1   905 525 9140 Ext. 27353

------------------------------

Date: Tue Aug 01 14:21:36 1995
From: Pat Place <[email protected]>
Subject: Re: Limits to Software Reliability (Mills, RISKS-17.22)

The example that Dick provides nicely illustrates the fallacy that software
is inherently less reliable than hardware.

Consider the situation where a system in operation (or test) fails.  We
observe the failure and may, if we are bright enough, be able to track that
failure down to some flaw. We can now use a very crude measure and
categorize the flaw: design-time or run-time.

Dick's example shows, to a reasonable level of confidence, that it is
possible to equate software and hardware systems at run-time.  If his
resistor simulation is sufficiently accurate, the software simulation will
suffer from run-time failures in the same way as actual resistors fail. If
not, we might have better run-time behavior in the software analog than we
do in the hardware system.

Both of the systems will suffer from the same design-time flaws.  They
have to, by the nature of the design of the experiment.

It is an interesting motherhood statement that software is inherently less
reliable than hardware. There must be some reason that this statement is
generally believed. The obvious reason is that our software systems have
more design-time flaws than our hardware systems.  I have two plausible
explanations for the greater number of design-time flaws in software.

1. We are designing far more complicated software systems than hardware
systems.
2. We are less careful designing software than hardware because the
perceived cost to change software is less than the perceived cost to change
hardware.

I suspect that it is a combination of these factors that leads to
more design-time flaws in software.

Pat Place   [email protected]

------------------------------

Date: Tue, 01 Aug 95 16:58:00 BST
From: [email protected]
Subject: Re: Limits to Software Reliability (Mills, RISKS-17.22)

The software in my home thermostat works just fine.  I would have expected a
mechanical clock to have broken down by now.

Large software systems of the kind whose reliability we discuss over the net
are seldom designed this way*.  It is not the case that there are several
hundred thousand copies of a few dozen basic modules that are connected
through narrow interfaces, with all of the complexity being in the topology
of the interconnections.  Instead, the interfaces are relatively fat, and as
the program grows in complexity it gets fatter and less well structured.
Imagine if you will a world in which you could add wires to any point inside
your 554 chips or resistors or transistors just by conceiving of a nice wire
to add.

Furthermore, the reliability traps in software systems are mostly design
faults, but the reliability traps in hardware systems are mostly component
failures, with some design failures thrown in.

Hardware reliability is an issue in software systems as well as in hardware
ones, of course, but the nature of software systems is such that you can gain
confidence in the reliability of your hardware by using it in unrelated
applications; this is only true at the component level in hardware systems.

The real reason why software appears to be not as reliable as hardware is
twofold:

 * the very malleability of software beguiles us into doing less thorough
   testing before fielding

 * the low unit cost of fielding** an additional unit of complexity induces us
   to design software systems with so many components that an analogous
   hardware system would have been unthinkable.

We should not say "software can't ever be as safe as hardware".

We should say instead "There is obviously a correlation between system
complexity and presence of design flaws.  Since the cost v. complexity curve is
steeper for hardware than software, there is also likely to be a correlation
between system complexity and system software content.  Therefore, there will
be a correlation between system software content and presence of design flaws,
but this in itself does not show causation."

-dk

* object-oriented designs come closer to this methodology, which is partly
 why they have gained a following in some circles.

** If you double the complexity of a program, you need to use a bigger disk or
  add a SIMM or two -- no big deal.  If you double the complexity of a piece
  of hardware, you are likely to double the fielding cost.

------------------------------

Date: Thu, 27 Jul 1995 16:39:25 -0700
From: [email protected] (John McHugh)
Subject: Call for Papers - 1996 IEEE Security and Privacy Symposium

  [Please contact John for the complete Call for Papers, submission
  details <papers due 6 Nov 1995>, program committee folks, etc.  PGN]

May 6-8, 1996, Oakland, California, sponsored by the
IEEE Computer Society Technical Committee on Security and Privacy in
cooperation with the International Association for Cryptologic Research (IACR).

Focus this year: re-emphasizing work on engineering and applications as well
as theoretical advances, plus new topics, theoretical and practical.

Information about this conference will also be available by anonymous ftp
from ftp.cs.pdx.edu in directory /pub/SP96, on the web at
http://www.cs.pdx.edu/SP96. The program chairs can be reached by email at
[email protected].

John McHugh, Program Co-Chair, Computer Science Department
Portland State University, P.O. Box 751, Portland OR 97207-0751, USA
Tel: +1 (503) 725-5842   Fax: +1 (503) 725-3211  [email protected]

------------------------------

Date: Thu, 27 Jul 1995 17:53:41 EDT
From: [email protected] (Purvis Jackson)
Subject: SEI Symposium: 1995 Software Engineering Symposium

Engineering the Future
11-14 September 1995
Pittsburgh, PA

 [VERY LONG message truncated; would not even fit in a RISKS issue!
 Please contact Purvis Jackson for Preliminary Program info.]

------------------------------

Date: 24 April 1995 (LAST-MODIFIED)
From: [email protected]
Subject: ABRIDGED Info on RISKS (comp.risks) [See other issues for full info]

The RISKS Forum is a moderated digest.  Its USENET equivalent is comp.risks.
SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent) on
your system, if possible and convenient for you.  BITNET folks may use a
LISTSERV (e.g., LISTSERV@UGA): SUBSCRIBE RISKS or UNSUBSCRIBE RISKS.  [...]
REQUESTS to <[email protected]> (which is not yet automated).  [...]

CONTRIBUTIONS: to [email protected], with appropriate, substantive Subject:
line, otherwise they may be ignored.  Must be relevant, sound, in good taste,
objective, cogent, coherent, concise, and nonrepetitious.  Diversity is
welcome, but not personal attacks.  [...]
ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY.
Relevant contributions may appear in the RISKS section of regular issues
of ACM SIGSOFT's SOFTWARE ENGINEERING NOTES, unless you state otherwise.

RISKS can also be read on the web at URL http://catless.ncl.ac.uk/Risks
  Individual issues can be accessed using a URL of the form
  http://catless.ncl.ac.uk/Risks/VL.IS.html  [...]

RISKS ARCHIVES: "ftp unix.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>
cd risks<CR> or cwd risks<CR>, depending on your particular FTP.  [...]
[Back issues are in the subdirectory corresponding to the volume number.]

------------------------------

End of RISKS-FORUM Digest 17.23
************************