Date: Sun 8 Sep 85 00:37:04-PDT
From: Peter G. Neumann <[email protected]>
Subject: RISKS-1.7, 08 Sep 85
To: RISKS: ;

RISKS-FORUM Digest       Sunday, 8 Sep 1985      Volume 1 : Issue 7

       FORUM ON RISKS TO THE PUBLIC IN COMPUTER SYSTEMS
                Peter G. Neumann, moderator

Contents:
 The risks of not using some technology (John McCarthy)
 More on SDI (Joseph Weizenbaum)
 SDI reliability (Martin Moore)
 Re: Hazards of VDTs and CRTs (Bernie Elspas)
 Viruses, Trojan horses, and worms (Fred Hapgood, PGN)
 Re: The Case of the Broken Buoy (Herb Lin, Matt Bishop)
 Re: Hot rodding your AT (Keith F. Lynch)

(Contributions to [email protected], Requests to [email protected])
(FTP Vol 1 : Issue n from SRI-CSL:<RISKS>RISKS-1.n)
----------------------------------------------------------------------

Date: 07 Sep 85  1329 PDT
From: John McCarthy <[email protected]>
Subject: The risks of not using some technology
To:   [email protected]

       The problem with a forum on the risks of technology is that
while the risks of not using some technology, e.g. computers, are
real, it takes imagination to think of them.  A further problem
with newspaper, magazine and TV discussion of technology is that
journalists and free-lance writers tend to run in intellectual
mobs.  This biases the discussion for everyone, especially when
the same journalists read each other's writings and call it public
opinion.  Here are some illustrations.

1. Suppose some organization manages to delay interconnecting
police data systems on some specious civil liberty grounds.
Suppose some wanted murderer is stopped for a traffic offense
but not arrested, because he is wanted in a jurisdiction not
connected to the computer system used by the police officer.
He later kills several more people.  The non-use of computers
will not be considered as a cause, and no-one will sue the
police for not interconnecting the computers - nor will anyone
sue the ACLU.  The connection will not even be mentioned in
the news stories.

2. No relative of someone killed on U.S. 101 during the 10 years
the Sierra Club delayed making it a freeway sued the Sierra Club.

3. No non-smoker who dies of lung cancer in an area newly polluted by
wood smoke will sue the makers of "Split wood not atoms" bumper
stickers.

***

Based on past experience, I expect this question to be ignored, but here's
one for the risk-of-computers collectors.  Is a risk-of-computers
organization that successfully sues to delay a use of computers either
MORALLY or LEGALLY LIABLE if the delay causes someone's death?  Is there
any moral or legal requirement that such an organization prove that they
have formally investigated whether their lawsuit will result in killing
people?  As the above examples indicate, the present legal situation
and the present publicity situation are entirely unsymmetric.

***

       Here's another issue of the social responsibility of computer
professionals that has been ignored every time I have raised it.

The harm caused by tape-to-tape batch processing as opposed to on-line
systems.

From the earliest days of commercial computing people have complained
about seemingly uncorrectable errors in their bills.  The writers
don't know enough to connect this with the use of tape-to-tape
batch processing.  Under such a system when a customer complains,
the person who takes the complaint fills out a form.  A key puncher
punches the form on a card.  At the next file-update, this card
goes to tape, and a tape-to-tape operation makes the correction.
If there is any error in the form or in the key punching, the
correction is rejected, and the customer gets the wrong bill again.
On-line systems permit the person who takes the complaint to make
the correction immediately.  Any errors in making the correction
show up immediately, and the person can keep trying until he gets
it right or ask for help from a supervisor.  Not only is the customer
better off, but the complaint-taker has a less frustrating job.
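
[Purely as an illustration of the contrast described above, a minimal
sketch in Python (names and details are hypothetical, not any real billing
system): the batch path silently rejects a malformed correction until the
next cycle, while the on-line path lets the complaint-taker see the error
and retry at once.

    def batch_correction(form, accounts):
        # Correction keypunched onto a card and applied at the next file
        # update.  Any error in the form or the keypunching rejects the
        # card; the customer gets the same wrong bill on the next cycle.
        if form.get("account") not in accounts or "new_amount" not in form:
            return "rejected"        # nobody finds out until the next cycle
        accounts[form["account"]] = form["new_amount"]
        return "corrected"

    def online_correction(get_form, accounts, max_tries=3):
        # The complaint-taker sees errors immediately and can keep retrying.
        for _ in range(max_tries):
            form = get_form()        # complaint-taker re-enters the data
            if form.get("account") in accounts and "new_amount" in form:
                accounts[form["account"]] = form["new_amount"]
                return "corrected"
            print("error in correction -- please re-enter")
        return "escalated to supervisor"
]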

My own experience with the difference occurred in 1979 when my
wallet was stolen, and I had to tell American Express and Visa.
American Express had an on-line system, and the person who took
the call was even able to give me a new card number on the spot.
The Visa complaint-taker had to look it up on a microfiche file
and call back, and they still got it wrong.  They gave me a new
account number without cancelling the old one.

Perhaps this issue is moot now, but I suspect there are still many
tape-to-tape systems, as well as systems on modern equipment that merely
emulate the old ones.  Shouldn't computer professionals
who pretend to social responsibility take an interest in an
area where their knowledge might actually be relevant?

Once upon a time, beginning perhaps in the middle nineteenth
century, scientific organizations were active in pressuring
government and business to use new technology capable of
reducing risk and promoting the general welfare.  I have in
mind the campaigns for safe water supplies and proper sewage
disposal.  Here's a new one that involves computer technology.

Theft can be reduced by introducing the notion of registered
property.  When you buy a television, say, you have the option
of buying a registered model, and the fact that it is registered
is stamped on it.  Whenever someone buys a piece of used registered
property, he has the obligation to telephone the registry, check whether
the property with that serial number has been reported stolen, and record
his ownership.  Repairmen are also obliged to telephone, either by voice
or by keyboard.
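
[A registry of this kind amounts to little more than a table keyed on
serial number.  A minimal sketch, with purely hypothetical names, of the
two operations described above, the stolen-property check and the
recording of ownership:

    class PropertyRegistry:
        def __init__(self):
            self.owners = {}     # serial number -> current owner
            self.stolen = set()  # serial numbers reported stolen

        def report_stolen(self, serial):
            self.stolen.add(serial)

        def check_and_transfer(self, serial, buyer):
            # What the buyer's call to the registry, by voice or by
            # keyboard, would accomplish.
            if serial in self.stolen:
                return "reported stolen -- do not complete the purchase"
            self.owners[serial] = buyer
            return "ownership recorded"

    registry = PropertyRegistry()
    registry.report_stolen("TV-1985-0042")
    print(registry.check_and_transfer("TV-1985-0042", "J. Smith"))  # stolen
    print(registry.check_and_transfer("TV-1985-0007", "J. Smith"))  # recorded
]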

Unfortunately, too many computer people imagine their social
responsibility to consist solely of imagining risks.

------------------------------

Date: Sat 7 Sep 85 16:30:11-EDT
From: Joseph Weizenbaum <[email protected]>
Subject: More on SDI (reply to comments on RISKS-1.5 statement)
To: [email protected]

I've received a number of responses to the remark I made that I would not
support the SDI program even if I thought it could be made to work.  I have
the feeling that, if I try to respond globally, a full-blown debate may
ensue, and that I really don't want to conduct with the bboard as the medium
of expression.  Nevertheless, I feel obligated to say just a few
words in an attempt to clarify some ideas that have probably been
misunderstood.

I said that my attitude derives from what I called a "quasi pacifist"
position.  One writer thought that pacifists are opposed to all forms of
self defense.  Actually pacifists are often the first to come to the
defense of justice being trampled.  But the form of their resistance to
wrongs is non-violent.  It ought also not to be confused with "passive"
resistance - Gandhi often pointed out, usually by his own example, that there
is nothing passive about non-violent resistance.  My use of the term "quasi
pacifist" also elicited comment:  Am or am I not a pacifist?  Let me say I
strive to become a pacifist, to grow up to be one.  One isn't an adult by
virtue of merely wishing or claiming to be one.  Just so with being a
pacifist.  I am still far from the goal.

People apparently believe that, were the SDI technically feasible, there
could be no reasonable objections to its development and deployment.
Wouldn't it be comforting, they ask, if every region, every city and village
in America had, so to speak, an invisible shield over it which guarded
against the invasion of hostile missiles?  Speaking entirely in practical
terms, I would remind them that every year tons (perhaps kilotons) of
marijuana are smuggled past the U.S. Coast Guard and the customs service.
Now that technical progress allows the construction of nuclear "devices"
smaller than a moderately sized overnight bag, a determined enemy could
destroy American cities without "delivering" warheads by air mail at all!
If I were responsible for national security, I would worry if, a few days
before the President's traditional State of the Union message, usually
delivered to the assembled leadership of all three branches of our
government, some foreign embassy evacuated all its personnel.  Perhaps a
nuclear device of moderate size has made its way to Washington and is about
to decapitate the government.  We can no more bring peace to this globe by
putting impenetrable domes over nations than we can halt the violence in
our cities by providing everyone with bulletproof clothing.  Human problems
transcend technical problems and their solutions.

But suppose we could solve the smuggled bomb problem.

I would still oppose SDI.

SDI is an attempt at a technological solution to problems whose roots are
social, political, economic, and cultural: in other words, human problems.
It is an attempt to find solutions under the light provided by technology
when in fact we know them to reside only in the human spirit.  That is what
guarantees the failure of SDI more surely than its complexity or the
impossibility of its being tested.

Beyond all that is the fact that we live in a world of finite resources.
The scarcest resource of all is human talent and creativity.  The military
already commands the time and energy of most American scientists and
engineers.  Money is another scarce resource on which the military has first
call. On the other hand, social services of all kinds
are being cut back.  Meanwhile  the country faces social problems
of horrendous dimensions:  There is massive, deep poverty in the land.
Adequate health care is beyond the reach of millions of citizens and
ruinously expensive for many more millions.  The schools are spewing out "a
rising tide of mediocrity" while a huge fraction of our youth is functionally
illiterate.  The conditions that brought on the riots in American cities,
for example in Watts, have never been attended to - they silently tick
away, time bombs waiting to go off.

When resources are limited they must be distributed on the basis of a
widely based consensus on priorities.  To silently consent to lowering
still further the priorities our society assigns to the people's health and
education in favor of spending the billions of dollars required above and
beyond the already huge military budget for only the first stages of SDI,
is, it seems to me, to condone the continuing impoverishment and
militarization of not only America, but of the whole world. Ever more
scientists and engineers will be occupied with military work.  Ever more
industrial workers of many different kinds will be enmeshed in the
militarized sectors of society by, for example, being required to have
military security clearances.  There is a danger that, in the process of the
growing militarization of society, a certain threshold, hard to define but
terribly real, will be crossed and that, once crossed, there will be no ready
road back to a civilian society.

       Joseph Weizenbaum

------------------------------

Date: Fri, 06 Sep 85 14:54:52 CDT
From: mooremj@EGLIN-VAX
Subject: SDI reliability
To: risks@sri-csl

[Peter, I have also posted this to SOFT-ENG.  If you think the duplication
is reasonable, please include it in RISKS as well. -- mjm]

I've been thinking about the SDI system and how it will be implemented.
Specifically, I've been looking at a system composed of N independent
platforms, each of which performs its own detection, decision making, and
response.  Given this type of system, we can reach a few conclusions about
the reliability of the whole system, based on the reliability of a single
platform.  I've crunched a few numbers: nothing profound, just some basic
statistics.

Definitions:

1. A "false positive" is an attack response by a platform when such a response
  is not justified.
2. A "false negative" is failure of a platform to attack when an attack
  response is justified.

Let's look at the false positive case first.  How likely is the system to
experience a false positive, based on the probability for each platform?

            N:    50        100        200        500       1000       2000
Pp:         +------------------------------------------------------------------
 1.000E-12 |  5.000E-11  1.000E-10  2.000E-10  5.000E-10  1.000E-09  2.000E-09
 1.000E-11 |  5.000E-10  1.000E-09  2.000E-09  5.000E-09  1.000E-08  2.000E-08
 1.000E-10 |  5.000E-09  1.000E-08  2.000E-08  5.000E-08  1.000E-07  2.000E-07
 1.000E-09 |  5.000E-08  1.000E-07  2.000E-07  5.000E-07  1.000E-06  2.000E-06
 1.000E-08 |  5.000E-07  1.000E-06  2.000E-06  5.000E-06  1.000E-05  2.000E-05
 1.000E-07 |  5.000E-06  1.000E-05  2.000E-05  5.000E-05  1.000E-04  2.000E-04
 1.000E-06 |  5.000E-05  1.000E-04  2.000E-04  4.999E-04  9.995E-04  1.998E-03
 1.000E-05 |  4.999E-04  9.995E-04  1.998E-03  4.988E-03  9.950E-03  1.980E-02
 1.000E-04 |  4.988E-03  9.951E-03  1.980E-02  4.877E-02  9.517E-02  1.813E-01
 1.000E-03 |  4.879E-02  9.521E-02  1.814E-01  3.936E-01  6.323E-01  8.648E-01

Pp is the probability that a given weapons platform will experience a false
positive.  N is the number of platforms in the system.  The entries in the
table give the probability that a false positive will occur on at least one
platform (and one may be enough to start a war).  For example, if there are
1000 platforms, and each one has a one-millionth (1.000E-6) probability of
experiencing a false positive, then the cumulative probability that some
platform will do so is 9.995E-4, or .09995%.  Looking at the table, I'd say
the numbers in the lower right corner are rather disquieting, to say the least.
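
[For anyone who wants to reproduce the table, the system-level figure is
just 1 - (1 - Pp)**N.  A few lines of Python, offered as a quick
independent check rather than the routine used to generate the table,
regenerate the entries:

    def p_any_false_positive(pp, n):
        # Probability that at least one of n independent platforms fires
        # falsely, given per-platform false-positive probability pp.
        return 1.0 - (1.0 - pp) ** n

    for pp in (1e-6, 1e-4, 1e-3):
        for n in (100, 1000, 2000):
            print(pp, n, p_any_false_positive(pp, n))
    # e.g. pp=1e-6, n=1000 gives about 9.995e-04, matching the table.
]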

Now let's look at the false negative case.  The table is structured a little
differently here.  In the false positive case, a single failure is disastrous;
in the false negative case, it's not.  The probability of a false negative
should be many orders of magnitude higher than that of a false positive,
simply because the protections against a false positive will actually
increase the chances of a false negative.  This table deals with a
100-platform system (that being the most my binomial coefficient routine
can handle).

  Pn:  .001    .01     .05     .1      .2      .3      .4      .5
N:  +---------------------------------------------------------------
30 | 1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000
35 | 1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9991
40 | 1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9824
45 | 1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9991  0.8644
50 | 1.0000  1.0000  1.0000  1.0000  1.0000  1.0000  0.9832  0.5398
55 | 1.0000  1.0000  1.0000  1.0000  1.0000  0.9995  0.8689  0.1841
60 | 1.0000  1.0000  1.0000  1.0000  1.0000  0.9875  0.5433  0.0284
65 | 1.0000  1.0000  1.0000  1.0000  0.9999  0.8839  0.1795  0.0018
70 | 1.0000  1.0000  1.0000  1.0000  0.9939  0.5491  0.0248  0.0000
75 | 1.0000  1.0000  1.0000  1.0000  0.9125  0.1631  0.0012  0.0000
80 | 1.0000  1.0000  1.0000  0.9992  0.5595  0.0165  0.0000  0.0000
85 | 1.0000  1.0000  1.0000  0.9601  0.1285  0.0004  0.0000  0.0000
90 | 1.0000  1.0000  0.9885  0.5832  0.0057  0.0000  0.0000  0.0000
95 | 1.0000  0.9995  0.6160  0.0576  0.0000  0.0000  0.0000  0.0000
100 | 0.9048  0.3660  0.0059  0.0000  0.0000  0.0000  0.0000  0.0000

Pn is the probability that a given platform will experience a false negative.
N is the minimum number of platforms (out of 100) which respond correctly.
The table entries give the probability that at least N platforms respond
correctly.  For example, if the probability of a given platform experiencing
a false negative is 0.1 (10%), then the probability is 99.92% that at least
80 out of 100 platforms respond correctly, 58.32% that at least 90 respond
correctly, and so on.
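
[The entries here are binomial tail probabilities: with per-platform
false-negative probability Pn, the chance that at least N of 100 platforms
respond correctly is the sum of the binomial terms from N to 100.  A quick
independent check (not the original routine):

    from math import comb

    def p_at_least(n_min, pn, total=100):
        # P(at least n_min of `total` platforms respond correctly),
        # given per-platform false-negative probability pn.
        p_ok = 1.0 - pn
        return sum(comb(total, k) * p_ok**k * pn**(total - k)
                   for k in range(n_min, total + 1))

    print(p_at_least(80, 0.1))   # about 0.9992, as in the table
    print(p_at_least(90, 0.1))   # about 0.5832
]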

Some of the Pn's and Pp's may strike you as much too high.  I don't think so.
The two tables were constructed on the simplifying assumption that Pn and Pp
are constants; actually, they are reliability functions.  The longer a platform
is in service, the more likely it is to malfunction.  If we assume that the
time to failure of a platform follows some form of Weibull distribution
[density a*B*t**(B-1) * e**(-a*t**B)], then the reliability function is
R(t) = e**(-a*t**B) and the hazard (instantaneous failure) rate is
Z(t) = a*B*t**(B-1).  I did not use this in constructing the tables, in order
to keep from drowning in figures, and because I don't really know how to
choose a, B, and the unit of t until we get a history of actual performance
(and by then it may be too late...).  Suggestions are welcome.
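
[To illustrate the point about time in service, with arbitrary parameters
(since, as noted above, a, B, and the unit of t are unknown without a
service history), a Weibull model with B > 1 gives a per-platform failure
probability that climbs steadily:

    from math import exp

    def weibull_reliability(t, a, b):
        # R(t): probability that a platform is still fault-free at time t.
        return exp(-a * t**b)

    def weibull_hazard(t, a, b):
        # Z(t): instantaneous failure rate at time t.
        return a * b * t ** (b - 1)

    a, b = 1e-4, 2.0            # illustrative values only
    for t in (1, 5, 10, 20):
        print(t, weibull_hazard(t, a, b), 1 - weibull_reliability(t, a, b))
]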

                                 Martin Moore ([email protected])

------------------------------

Date: Fri 6 Sep 85 15:45:16-PDT
From: Bernie <[email protected]>
Subject: Hazards of VDTs and CRTs
To: [email protected]
RE:  RISKS contribution from friend@nrl-csr (Al Friend, Space and
    Naval Warfare Systems Command); RISKS-1.6

The 1981 FDA study cited by Friend probably contains much useful (albeit
rather "soothing") information about VDT *radiation* hazards (ionizing,
RF, and acoustic).  One should observe carefully, however, that the
quoted material fails to mention other kinds of hazards, nor does its
title reflect any others.  One should, therefore, not assume that
*radiation* hazards are the whole story for VDTs.  I would have felt
more relieved at the data presented had it included some other, more
obvious, risk factors such as visual effects.

In particular, recent studies show that at least two visual effects may
be quite important as factors producing severe eye fatigue.  The first,
visual flicker (resulting from the screen refresh rate), is probably
well understood (from extensive psychovisual experimentation in
connection with TV viewing).  The higher screen refresh rates used on
some computer graphic displays seem to minimize this problem.  However,
60 fields/sec (50, in Europe) is standard for most personal computers.

[Flicker depends on many factors: rate, ambient light, screen contrast,
brightness, subject motion, color, etc.  More seems to be known about the
conditions for minimal *perceptible* flicker than about those that can
produce visual fatigue, eyestrain, headaches, etc.  Also, there is a
fairly large variation among different subjects even for minimal
perceptible flicker, and flicker may be noticeable (and annoying) in
the "fringe visual field" (off to the side) even when it is not detectable
for the object directly ahead.]

The second factor is connected with the fact that the human eye is not
chromatically corrected, i.e., its focal accommodation is different for
different colored objects.  The result is that when the eye is focused
correctly on a blue object, a nearby red object will be slightly out of
focus.  One study [1] indicates that the discrepancy is about 0.5
diopters (for a viewing distance of 50 cm).  According to one report
I've seen (sorry, I can't find the reference!), this means that in a
multicolored display the eye will automatically be making rapid
focus adjustments in scanning the screen.  Even worse, the effect can
also exist in some monochrome displays, i.e., where the character
color (white, say) is achieved by a mixture of two differently colored
phosphors separated substantially in wavelength.  In the latter situation
it appears that (at least for some people) the eye may undergo extremely
rapid focus oscillations in the futile attempt to bring both component
colors into focus.   Quite understandably this may result in severe eye
fatigue, even though the subject may not be consciously aware of what
is happening.  This occurs mostly when the two phosphors radiate nearly
pure spectral lines.  Single-phosphor displays and those where the
component pure colors are close enough in wavelength seem not to be prone
to this disturbing effect.  I recall seeing the statement that AMBER
displays are not objectionable for this reason, and that one nation
(Sweden or West Germany, I think) has specified amber displays for
extended-time industrial use.

It seems to me that the chromatic refocusing effect is probably the more
serious of the two cited, especially on high resolution displays.  The
fact that it seems not to have been noticed on conventional (analog)
color TV displays may be accounted for by their relatively poor
resolution (low bandwidth).  Thus, the brain expects to see a sharper image
on a high-resolution (RGB) display than on a conventional TV (where
everything--especially the reds and oranges--is pretty blurred anyway).

In summary, in concentrating on the "serious" potential hazards of
X-rays, etc., from VDTs, we should not thereby overlook the more obvious
factors concerned with the visual process itself.


1. G.M. Murch, "Good color graphics are a lot more than another pretty
  display," Industrial Research and Development, pp. 105-108 (November
  1983).

Bernie Elspas

[Material inside [...] may be deleted at editor's option.  Bernie]
                     [The editor decided to leave it in.  PGN]

------------------------------


Date: Fri 6 Sep 85 22:55:13-EDT
From: "Fred Hapgood" <SIDNEY.G.HAPGOOD%[email protected]>
Subject: Viruses, Trojan horses, and worms
To: [email protected]

       I would like to see a discussion by the members of this list
of the degree to which computer users, whether individuals or
organizations, are vulnerable to worms and Trojan Horses. These
terms, which first appeared in this list in #3, refer to programs
designed to inflict some form of unpleasantness on the user, up to
and including the destruction of the system. Typically they erase
all files in reach. I have read discussion, in Dvorak's column in
*Infoworld*, of the possibility that such programs might modify the
operating system as well, so that when the unfortunate user tries
to restore the destroyed files from backup disks, those too would be
erased.  One can also imagine, vaguely, programs that are insidious
rather than calamitous, that introduce certain categories of error,
perhaps when strings of numbers are recognized. These might be able
to do even more damage over the long run.

       There are two issues with these programs. The first is what
they might do, once resident. The second is the nature of the
vector, to borrow a medical term. Worms can be introduced directly,
by 'crackers', or surreptitiously, by hiding them inside a legitimate
program and waiting for an unsuspecting user to run that program on
his system, thus activating the 'Trojan Horse'. The article cited in
#3 had to do with a program camouflaged as a disk directory that was
circulated on the download BBSs. One could imagine a spy novel
devoted to the theme: perhaps it was the KGB, and not Ben Rosen, who
provided the money to launch Lotus. Inside every copy of 1-2-3 and
Symphony is a worm which, every time it is run, checks the system
clock to see whether it is later than, say, October 1, 1985. On that
date the commercial and industrial memory of the United States dies.
The CIA suspects something is up, but they don't know what.
Unfortunately the director of the team working on the problem is a
KGB mole.  Fortunately there is this beautiful and brilliant female
computer genius ...

       Anyway, I have a specific question: can anyone imagine a
circumstance in which a program appended to a piece of text in a
system could get hold of the processor? It would appear not, which
is a good thing, because if such circumstances did exist, then it
would become possible to spread worms by piggybacking them on a
telecommunicated piece of text. The right piece of text -- some
specialized newsletter, or even a crazily attractive offer from
a 'Computer Mall'-- might find itself copied into thousands of
systems. But I am not a technical person, and cannot establish
to my satisfaction that such an eventuality is truly impossible.

       Is it?

------------------------------

Date: Sat 7 Sep 85 23:59:24-PDT
From: Peter G. Neumann <[email protected]>
Subject: Re: Viruses, Trojan horses, and worms
To: SIDNEY.G.HAPGOOD%[email protected]
Cc: [email protected]

Absolutely not.  It is quite possible.  However, I can assure you that
this issue does not now include a virus -- although some message systems
tend to permit you to edit a message before resending it, with no
indication that it has been altered.  Thus, even in the presence of all
of those routing headers, you can never be sure you really have picked
up or been forwarded the original message.  The example of squirreled
control characters and escape characters that do not print but cause
all sorts of wonderful actions was popular several years ago, and
provides a very simple example of how a message can have horrible
side-effects when it is read.
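
[A very simple defensive illustration, purely conceptual: before displaying
a received message, scan it for the kind of non-printing control or escape
characters mentioned above.  Text that is really just text should contain
none beyond ordinary whitespace.

    SAFE_WHITESPACE = {"\n", "\r", "\t"}

    def suspicious_characters(message):
        # Return (position, character code) for every control character
        # that is not ordinary whitespace -- e.g. ESC (code 27), which
        # could begin a terminal escape sequence.
        return [(i, ord(c)) for i, c in enumerate(message)
                if ord(c) < 32 and c not in SAFE_WHITESPACE]

    msg = "Have you seen this great offer?\x1b[2J..."  # hidden ESC sequence
    print(suspicious_characters(msg))                  # [(31, 27)]
]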

  [The technical aspects of worms, viruses, and Trojan horses are probably
best discussed elsewhere -- e.g., in SECURITY@RUTGERS.  (See also Fred
Cohen's paper in the 7th DoD/NBS Computer Security Conference in 1984.)
From the RISKS point of view, however, they are definitely important to this
forum -- and they present a very serious risk to the public.  PGN]

------------------------------

Date: Fri,  6 Sep 85 16:01:38 EDT
From: Herb Lin <[email protected]>
Subject:  The Case of the Broken Buoy
To: [email protected]
cc: [email protected]

   Did the NWS say that (ie, even if the buoy had been alive at
   the time, they could not have predicted the storm) in testimony,
   or after the verdict?  If after the verdict, no comment.

I believe it was during testimony, but I am not certain.

   But
   if as testimony, Herb, the jury (or judge) apparently didn't
   believe the NWS testimony.  If you believe the NWS claim, the
   headline was correct, but it's unfair to say the court ruled
   that way when it explicitly based its ruling on negligence.

But it is not clear that the court understands that the significance
of "missing data" is context-dependent.  Sometimes it matters, and
sometimes it doesn't.  This is a point that non-scientists have a very
hard time understanding.

I am not defending the NWS; they should have repaired the buoy.  But
given limited resources, how are they to set priorities in deciding
what to repair first?  The implications of the verdict are to me
frightening, placing NWS and all other similar organizations in a
double bind: all equipment must be functional even when they don't
have sufficient dollars to keep it that way.

------------------------------

Date:  6 Sep 1985 1359-PDT (Friday)
From: Matt Bishop <[email protected]>
To: Herb Lin <[email protected]>
Cc: [email protected]
Subject: Re: The Case of the Broken Buoy

   But it is not clear that the court understands that the significance
   of "missing data" is context-dependent.  Sometimes it matters, and
   sometimes it doesn't.  This is a point that non-scientists have a very
   hard time understanding.

At this point I'm going to bow out of the discussion, since I am not
familiar enough with the decision to know if the court understood that
point.  The NWS certainly should have made its position very clear, so
the court could make an informed decision (about whether or not
negligence was involved).

------------------------------

Date: Fri,  6 Sep 85 09:24:39 EDT
From: Keith F. Lynch <[email protected]>
Subject: Re: Hot rodding your AT
To: [email protected]

   Date: Wed, 4 Sep 85 14:41:38 EDT
   From: Dan_Bower%[email protected]
   Subject: Hot rodding your AT

   In a recent issue of PC Magazine, Peter Norton espoused the idea of
   substituting a faster clock chip to enhance performance.  Now, according
   to the folk on the Info-IBM PC digest, this may create problems.  An
   off the shelf PC AT is composed of components guaranteed to work to
   IBM spec, e.g., 6 MHz.  If I increase the clock rate, then the whole
   rest of the machine has to be up to snuff.  If not, a part dies and
   I pay a nasty repair bill.

   Now if I took Mr. Norton's word as gospel, swapped chips and set
   my PC AT on fire, would he be liable?  How about the publisher?

I doubt this would break anything.  The machine would simply cease
working above a certain speed, and resume working below that speed.

I know of a couple of people who have done this on APPLE computers,
trying various speeds so as to run their machines at the highest speed
they will go.

Also, I once did the same thing with a synchronous link, i.e. hooked
up an external clock and cranked it up to the highest speed it would
work reliably at.

Also, I have done this with my Hayes modem.  The standard duration for
touchtone pulses is 70 ms.  The phone system here will accept tones as
short as 38 ms.
                                                               ...Keith
------------------------------

End of RISKS-FORUM Digest
************************

-------