RISKS-LIST: RISKS-FORUM Digest  Thursday 15 March 1990   Volume 9 : Issue 75

       FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
 PRODIGY updating programs (Simson L. Garfinkel)
 Who shall guard the guards? (Robert A. Levene)
 Journalistic hacking (Rodney Hoffman)
 Caller-id by name (Gary T. Marx)
 Re: PSU Hackers thwarted (David C Lawrence)
 Re: Tracking criminals and the DRUG police-action (Brinton Cooper)
 RISKS of "Evolutionary Software" (Rajnish and Gene Spafford via Will Martin)
 Human-centered automation (Donald A Norman)
 Re: Airbus Crash: Reports from the Indian Press (Henry Spencer)

The RISKS Forum is moderated.  Contributions should be relevant, sound, in good
taste, objective, coherent, concise, and nonrepetitious.  Diversity is welcome.
CONTRIBUTIONS to [email protected], with relevant, substantive "Subject:" line
(otherwise they may be ignored).  REQUESTS to [email protected].
TO FTP VOL i ISSUE j:  ftp CRVAX.sri.com<CR>login anonymous<CR>AnyNonNullPW<CR>
 cd sys$user2:[risks]<CR>get risks-i.j .  Vol summaries now in risks-i.0 (j=0)

----------------------------------------------------------------------

Date: 12 Mar 90 20:44:07 EST (Mon)
From: [email protected] (Simson L. Garfinkel)
Subject: PRODIGY updating programs

I must take issue with Eric Roskos's claim that PRODIGY can only update
information in the STAGE.DAT file.

While researching my article on PRODIGY for The Christian Science Monitor, I
was told by Prodigy's manager of software services that one of the really
nifty tricks of PRODIGY is that nearly the entire system running on the PC ---
including the EXE files --- can be updated remotely.  This eliminates the need
to send out floppy disks with updates.  (They didn't have it working well at
the beginning and actually had to send out one update on disk --- an extremely
expensive proposition.)
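
A minimal sketch, in C, of how such a client-side update scheme might work.
Everything here -- the record layout, the names, the functions -- is invented
for illustration; PRODIGY's actual protocol is not publicly documented.

    #include <stdio.h>

    /* Hypothetical patch record sent down the phone line. */
    struct update {
        char filename[13];      /* DOS 8.3 name, e.g. an EXE file */
        long offset;            /* where in the file to patch     */
        long length;            /* bytes to write (<= 512 here)   */
        unsigned char data[512];
    };

    /* Apply one patch record to a local file.  The RISK is visible:
       the client writes whatever the host sends, executables
       included, with no check on what is being changed. */
    int apply_update(const struct update *u)
    {
        FILE *fp = fopen(u->filename, "r+b");
        if (fp == NULL)
            return -1;
        if (fseek(fp, u->offset, SEEK_SET) != 0 ||
            fwrite(u->data, 1, (size_t)u->length, fp)
                != (size_t)u->length) {
            fclose(fp);
            return -1;
        }
        fclose(fp);
        return 0;
    }

Nothing in such a scheme restricts which files the host may rewrite, which
is exactly why remote updating of EXE files deserves scrutiny.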

The reason for wanting to do this comes down to two things: Prodigy's pricing
structure and its target market.  Prodigy charges a flat $9.95 a month.  At
that price, it just isn't economically possible to send out floppy disks,
especially if they want to have 1-5 million subscribers within the next 2-10
years.

The other thing is their target market: they want people who don't know
anything about programs or files.  Automatic updates eliminate the need for
users to put disks into their computers and try to figure out what is going
on.

The Prodigy censors are a very real problem.  They have recently shut down
PRODIGY groups that have ventured into "unacceptable" topics like abortion and
homosexuality.

------------------------------

Date: Wed, 14 Mar 90 13:00:49 EST
From: [email protected] (Robert A. Levene)
Subject: Who shall guard the guards? (was: Drive-by-wire cars)

In RISKS 9.74  Craig Leres <[email protected]> writes:

> ... Hopefully, auto manufacturers will be as conservative with drive by wire
> systems as they have been with the computer controlled engines they are
> currently building. For example, the engine in my '89 GM car has a computer
> that controls functions such as fuel delivery and ignition.  But nearly all
> the computer controlled systems have backups that implement the "limp home
> mode." ...

 Don't let the graceful handling of an acute failure mode lull you into a
false sense of security.
My 8500-mile '89 GM car is in a dealer's repair shop due to computer
failure.  If the emissions computer *fails*, the car will "limp home."
But if an erratic computer misinterprets the car's state, it will send
faulty control signals and cause unpredictable performance. "Hey, HAL - Can
you say 'Garbage In, Garbage Out?'"

 For two weeks, the computer failed intermittently, occasionally stalling
the car without warning - not a pleasant experience, especially when it
stalled while going 55mph on a 10-lane highway, and at a busy intersection
while making a turn.  After several such failures and three tows, the car's
computer finally failed for the mechanics, giving them the required
justification to replace the computer.

 The computer already has the capability to detect faulty sensors
throughout the engine.  A second, independent computer is needed to monitor
the performance of the engine computer (i.e., "guard the guards") in order
to detect and record intermittent failures.  At minimum, the car should also
have a manual override switch to enter "limp home" mode instead of the $45
"tow home" mode.

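A minimal sketch of what such a guard computer might do, with invented
sensor names and thresholds (no real GM interface is implied): cross-check
the engine computer's view of the engine against an independent reading of
the sensors, log every disagreement so intermittent faults leave a trail,
and honor a manual limp-home switch.

    #include <stdlib.h>

    /* All names and thresholds are illustrative assumptions. */
    struct engine_state {
        int rpm;
        int throttle_pct;       /* 0..100                         */
        int fuel_pulse_ms;      /* commanded injector pulse width */
    };

    extern struct engine_state read_engine_computer(void);
    extern struct engine_state read_sensors_directly(void);
    extern int  driver_limp_home_switch(void);
    extern void enter_limp_home(void);
    extern void log_fault(const char *msg);

    /* Called periodically, e.g. from a timer interrupt. */
    void guard_step(void)
    {
        struct engine_state ecu  = read_engine_computer();
        struct engine_state real = read_sensors_directly();

        /* An erratic computer shows up as a disagreement between
           what it believes and what the sensors actually say. */
        if (abs(ecu.rpm - real.rpm) > 500)
            log_fault("ECU rpm disagrees with sensed rpm");
        if (ecu.throttle_pct == 0 && real.rpm > 3000)
            log_fault("ECU sees closed throttle at high rpm");

        /* The $45 tow becomes a switch on the dashboard. */
        if (driver_limp_home_switch())
            enter_limp_home();
    }

The logged record is the point: an intermittent failure like the one
described above would then leave evidence even when the computer behaves
itself at the dealer.
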
Rob Levene

------------------------------

Date: 11 Mar 90 18:24:20 PST (Sunday)
From: Rodney Hoffman <[email protected]>
Subject: Journalistic hacking

Summarized from a story by Sheryl Stolberg in the 'Los Angeles Times' 10
March 1990:

Fox Television employees in New York and Los Angeles discovered in February
that someone had been trying to gain access to their computers, using the
same password in both cases.

Free-lance journalist Stuart Goldman was arraigned Friday and charged with
violating federal and California anti-hacking laws.  His personal computer
and floppy disks were confiscated.  According to the federal prosecutor's
affidavit, Goldman made several attempts -- at least one of them successful
-- to gain entry to "sensitive data files regarding ... news stories worked
upon by the company's journalists."

Network officials would not disclose what information was sought, but
Goldman had worked briefly for the Fox-produced news tabloid 'A Current
Affair', and he had recently been trying to sell an inside story on such
shows to the 'Los Angeles Times'.

[See 'Hacking for a competitive edge' in RISKS 8.71 for an earlier case of
journalists apparently trying to steal stories via computer hacking.]

------------------------------

Date: Tue, 13 Mar 90 14:48:59 EST
From: [email protected]
Subject: Caller-id by name

                       Newsday, February 27, 1990

          Don't Give Up Your Privacy to Find Out Who's Calling
                              Gary T. Marx

The telephone is something that is usually answered, but rarely questioned.
That is changing, however, with the regional phone companies' proposal to
introduce unblockable Caller ID.

Almost everyone answers "yes" to the question "Would you like to know who is
calling you before you pick up the phone?"  But most people would answer "no"
to the corollary question, "Would you mind if every time you made a call your
phone number was automatically revealed to the person called?"

This offering, for which the phone company would charge a separate fee, changes
the nature of phone service by removing the control that callers have over
their phone numbers.  In other words, the service has consequences for the
person calling -- unlike other recent developments in phone service, such as
speed dialing or automatic redialing.

By technological fiat, the phone company takes personal information away from
the caller and sells it to the person called.  This is similar to the data
scavenging companies that sell credit and related information about individuals
without their consent.  Contrary to what most people believe, the phone
companies are saying they, not you, control your phone number.

Unblockable Caller ID is unlikely to become the standard in the United States.
California already has a law requiring that if the service is offered, it
should come with a free blocking option -- callers not wanting to reveal their
number can enter three digits and the called party will see a "P" for private
call.  Similar federal legislation has been introduced in Congress by Senator
Herbert H. Kohl (D-Wisc).

That is the compromise position.  Don't ban the service.  Don't offer it in an
unrestricted way as most phone companies propose.  Give callers limited control
over what is revealed -- their number, or the fact that they don't want to
reveal it.

While such a position is better than an unrestricted offering, it is far from
ideal.  Callers not wanting to reveal their number to the person called run the
risk of not getting through.  People receiving calls who see a "P" may decide
that if you won't identify yourself, they won't talk to you.  That is a
reasonable response on the part of the called person seeking to avoid unwanted
calls.  The caller appears suspect -- even though in most cases what callers
wish to keep to themselves is their phone number and perhaps location, not
their name.

The problem is that now, as proposed, the only form of identification the
technology delivers is the phone number.  If it were changed so that callers
had the option of delivering their names, most privacy problems would be
resolved.  In general it would also be more useful to the people called to see
a name rather than a phone number.  They needn't run the risk of refusing a
call from an unrecognized phone number that might in fact be a family member
calling from a service station to report the car has broken down, or an
out-of-town friend calling to announce a surprise visit.  This is also normal
phone etiquette, which begins with callers identifying themselves by name, not
by telephone number.
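
The choice being argued over is small enough to state as code.  Here is a
minimal sketch (the field names are invented) of the switch-side display
logic under the blocking compromise, extended with the name option proposed
above.

    #include <stdio.h>

    struct call {
        const char *number;
        const char *name;   /* NULL unless the caller offers a name */
        int blocked;        /* caller dialed the blocking prefix    */
    };

    const char *caller_id_display(const struct call *c)
    {
        if (c->blocked)
            return "P";         /* private call                   */
        if (c->name != NULL)
            return c->name;     /* the option argued for here     */
        return c->number;       /* what the companies now propose */
    }

    int main(void)
    {
        struct call c = { "555-0199", NULL, 0 };
        printf("%s\n", caller_id_display(&c));  /* prints the number */
        return 0;
    }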

One would hope that the phone companies, as publicly regulated monopolies,
would feel an obligation to develop technical innovations that, beyond
enhancing profits, would further important social values such as privacy and
equity.

At a minimum, their actions should not diminish these as unblockable Caller ID
will.  Investors, systems designers, and marketers need to consider how a
development might be misused, or have undesirable social consequences.

Services like Caller ID should be developed in consultation with citizen
advisory groups.  Consumers should not be put in the defensive position of
having to respond to whatever radical changes the local phone service proposes.

We live in a democracy, not a technocracy.  As networks become more important
and invasive technologies more powerful, public utility commissions, which
traditionally have focused on the economic aspects, must look to broader social
aspects.  Too often, communications technology is seen only as something that
erodes, rather than enhances, privacy.  But that need not be the case.  Giving
callers the option of providing their name would serve the interests of both
the caller and the person called.

------------------------------

Date: Tue, 13 Mar 90 10:12:38 EST
From: [email protected] (David C Lawrence)
Subject: Re: PSU Hackers thwarted (Angela Marie Thomas, RISKS-9.74)

Just a couple of comments on this story.  These aren't criticisms, just
observations on perspective; as professionals in all areas of life discover,
there are frequently differences of opinion about how matters of their field
should be presented to the general populace.  Recognising that these
differences spring up even among members of the field, these are my opinions.

  According to records, Clark is accused of running up more than
  $1000 in his use of the computer account.  Geyer is accused of
  running up more than $800 of computer time.

As a user of several systems that use pseudo-monetary accounting schemes, I
question whether any resources were really wasted at all, "computer time"-wise.
I do not know much about the systems in question, but if they parallel those
that I have had these experiences with then there were cycles to spare.  To me,
just handing out numbers with dollar signs attached seems to be attempting to
(either knowingly or not; I do not know the author's experience, either)
garner a response of, "So much money!  The waste!  Clearly a terrible degree of
theft!"
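
For those who have not used such systems, here is a minimal sketch of how
pseudo-monetary charges are typically computed (the rates are invented; real
sites vary widely).  On a machine with cycles to spare, the dollar figure
measures accounting policy, not resources denied to anyone.

    #include <stdio.h>

    #define CPU_RATE_PER_SEC     0.05  /* "dollars" per CPU second     */
    #define CONNECT_RATE_PER_MIN 0.10  /* "dollars" per connect minute */

    double session_charge(double cpu_seconds, double connect_minutes)
    {
        return cpu_seconds * CPU_RATE_PER_SEC
             + connect_minutes * CONNECT_RATE_PER_MIN;
    }

    int main(void)
    {
        /* Under 3 CPU-hours plus 100 connect-hours "costs" $1100. */
        printf("charge: $%.2f\n", session_charge(10000.0, 6000.0));
        return 0;
    }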

  Among the systems accessed was Internet, a series of computers
  hooked to computer systems in industry, education and the military,
  according to records.

Every time I see a comment about the Internet like this it just makes me ponder
what the general public thinks.  Is it, "OH!  Some terribly important network!
Our national security might have been breached!"?  I do not mock the importance
of the Internet.  The reality of the Internet, though, is that access to it is
something that many people can get quite easily through their school or work
and the fact that the Internet was "Among the systems accessed" isn't a very
shocking thing.  Then again, perhaps it is shocking based on the system he was
coming from.  My comment, however, is based on what general reaction to the
above statement could be like -- the public won't know about the specifics of
that system either.

  Matt Crawford, technical contact in the University of Chicago
  computer department discovered someone had been using a computer
  account from Penn State to access the University of Chicago
  computer system.

Not to detract from Matt's work, but this paragraph essentially says nothing.
One of the reasons that these networks exist is so that people can do work at a
machine when they are half-way around the world from it.  There is nothing
especially surprising about someone from one university accessing a computer at
another university.  This is lacking in information and only makes me wonder
what we are supposed to infer from it.

Dave

------------------------------

Date:     Tue, 13 Mar 90 9:02:04 EST
From: Brinton Cooper <[email protected]>
Subject:  Re: Tracking criminals and the DRUG police-action (RISKS-9.74)


In Risks 9.74, J. Eric Townsend writes, in part:

>  "forecast crimes" -- could they have predicted the hit and run driver
>  who totaled my car and didn't even stop to check if my passenger and
>  I were injured?  Maybe they should try predicting crimes by politicians
>  and federate [sic] employes [sic] first, just to get the bugs out of
>  the system....

I presume he meant "federal employees".  In any case, his statement is an
unwarranted defamation of the character of literally millions of public
employees who work honestly, cheerfully, and carefully to give the taxpayers
an honest day's work for a fraction of an honest day's pay.  As an academic, Mr
(Dr?) Townsend should be a little more objective in his characterization of
folks who work in other domains.
                                                 _Brint

------------------------------

Date:     Fri, 9 Mar 90 8:41:35 CST
From: Will Martin <[email protected]>
Subject:  RISKS of "Evolutionary Software" (Rajnish, Spafford)

The following is extracted from the latest issue of the Computers and
Society Digest. Please note the comments about this software "growing"
to become so complex that it can no longer be understood by its creators.
As soon as I read that, I thought of "Risks"!

Regards, Will Martin

 ***Begin Extract***
                The Computers and Society Digest, Volume 4, #9
                           Thursday, March 8th 1990

 From: "rajnish" <[email protected]>
 Date: Wed, 28 Feb 90 10:49:32 PST
 Subject: Evolutionary Software?

 I was wondering what people thought of the Darwinian software being created
 by the computer scientists at the University of California, Los Angeles.
 Their approach is pretty radical: they're creating more powerful and
 reliable software through self-evolution than their programmers could
 design by hand.

 I guess these small modular programs they have going interact and merge
 with each other to create new generations that can anticipate potential
 pitfalls that human programmers can't.  Just as we humans think about many
 things at the same time in our heads, the computer runs thousands of
 programs simultaneously, and a master program picks out the ones that suit
 its needs most efficiently, integrating them to produce following
 generations that are even more powerful.  Survival of the fittest?
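
 [The scheme described here is usually called a genetic algorithm.  What
 follows is a toy, self-contained sketch in C of the general idea --
 population, selection, crossover ("merging"), and mutation; it illustrates
 the concept only and is not the UCLA system.]

     #include <stdio.h>
     #include <stdlib.h>

     #define POP   50            /* population size     */
     #define GENES 32            /* bits per individual */
     #define GENS  100           /* generations to run  */

     /* Toy fitness: count the 1-bits.  A real system would score how
        well each candidate performs its task. */
     static int fitness(unsigned long g)
     {
         int n = 0;
         while (g) { n += (int)(g & 1); g >>= 1; }
         return n;
     }

     static unsigned long rand_bits(void)
     {
         return ((unsigned long)rand() << 16) ^ (unsigned long)rand();
     }

     int main(void)
     {
         unsigned long pop[POP];
         int i, gen;

         for (i = 0; i < POP; i++)
             pop[i] = rand_bits();

         for (gen = 0; gen < GENS; gen++) {
             unsigned long next[POP];
             for (i = 0; i < POP; i++) {
                 /* The "master program": keep the fitter of two
                    random candidates, twice, as parents. */
                 unsigned long a = pop[rand() % POP];
                 unsigned long b = pop[rand() % POP];
                 unsigned long c = pop[rand() % POP];
                 unsigned long d = pop[rand() % POP];
                 unsigned long pa = fitness(a) > fitness(b) ? a : b;
                 unsigned long pb = fitness(c) > fitness(d) ? c : d;
                 unsigned long mask = (1UL << (rand() % GENES)) - 1;

                 /* Merge the parents; occasionally mutate. */
                 next[i] = (pa & mask) | (pb & ~mask);
                 if (rand() % 100 < 5)
                     next[i] ^= 1UL << (rand() % GENES);
             }
             for (i = 0; i < POP; i++)
                 pop[i] = next[i];
         }

         {   /* Report the fittest survivor. */
             int best = 0;
             for (i = 1; i < POP; i++)
                 if (fitness(pop[i]) > fitness(pop[best]))
                     best = i;
             printf("best fitness after %d generations: %d/%d\n",
                    GENS, fitness(pop[best]), GENES);
         }
         return 0;
     }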

 In Alameda and Orange Counties in California, an example of their Darwinian
 programming is helping to control the mosquito population.  Each of the
 individual program modules is able to successfully mimic the behavior of
 the mosquitoes to determine growth rate, etc., and so find out precisely
 where and how much insecticide is necessary to kill itself and its
 children.  Instead of the previous mass insecticide bombings at 20000 sites
 picked out by human programmers, this software is doing a near perfect job
 with only 3000 sites it picked out on its own.  Pretty impressive, you
 think?

 The computer scientist who developed this approach to software design,
 Danny Hillis (founder of Thinking Machines in Cambridge, Mass.), thinks
 that because of its constant evolution his software is eventually going to
 make itself so complex that even its designers won't be able to comprehend
 all of its functions.  Kinda like becoming God?

 This guy even has software working like a biological parasite to wipe out
 incompetent programs, thereby forcing the master program to search for
 programs that are even better!  This parasite even looks around for viruses
 to kill.  Instead of taking the usual route of trying to simulate human
 qualities like vision and speech, Hillis' artificial intelligence just
 tries to mimic the unexpected behavior that all organisms exhibit, using a
 parallel supercomputer to accelerate the natural evolutionary process as
 defined by Darwin.

 Interesting, you think?  Rajnish

 - - - - - - - - - - - - - - - - - -

 Date: 3 Mar 90 23:30:30 GMT
 From: [email protected] (Gene Spafford)
 Subject: Re: Evolutionary Software?

 Danny Hillis presented his work at the 2nd Conference on Artificial Life,
 held in Santa Fe, the week of Feb. 4.  Lots of other interesting ideas were
 presented, too.

 The proceedings of the 1st conference have been published by
 Addison-Wesley.  The second set of proceedings will be published next year,
 also by Addison-Wesley.

 You can get more information about the conference by contacting Chris
 Langton @ the Santa Fe Institute for Non-linear Studies, (505) 667-1444.

 (I was there, talking about computer viruses as a form of artificial life.)

 Gene Spafford

 ***End extract***

------------------------------

Date: Thu, 15 Mar 90 08:10:33 PST
From: [email protected] (Donald A Norman-UCSD Cog Sci Dept)
Subject: Human-centered automation

RISKS 9.74 had a statement about the NASA human-centered aviation safety
project.  I don't know if I am in that project, but

 1. Ed Hutchins and I have a grant with NASA-Ames on aviation safety.
 2. I have strongly argued for user-centered system design (UCSD) in general.
 3. We are working on the checklist problem and on the automation problem.

So a quick summary of our work might be appropriate: it certainly fits the
domain covered by RISKS.

Automation.  I have been concerned with the fact that too many automatic
devices are built to take over the jobs performed by humans, but with no
understanding of the issues that will arise when they fail.  A simple
example is the China Airlines incident in which the number 4 engine lost
power and the autopilot compensated without notifying anyone.  If the first
officer had been flying instead of the autopilot, he might have said, "Hmm,
I seem to be compensating more and more.  Wonder what's happening?"  But the
autopilot was silent, and when the problem finally exceeded the control
authority of the autopilot, the result was an uncontrolled aircraft that
came close to total disaster.
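
A minimal sketch of the kind of feedback being argued for, with invented
names and thresholds (no real autopilot interface is implied): the
autopilot keeps flying the plane, but it announces when its silent
compensation drifts toward the edge of its control authority.

    #include <math.h>

    #define MAX_TRIM_DEG    10.0   /* available control authority    */
    #define ALERT_FRACTION   0.5   /* speak up at 50% authority used */

    extern double current_compensation_deg(void);  /* from autopilot */
    extern void annunciate(const char *msg);

    /* Called on every control-loop cycle. */
    void feedback_step(void)
    {
        double trim = current_compensation_deg();

        /* A human pilot holding this much correction would ask,
           "Why am I compensating more and more?"  Make the autopilot
           ask the same question out loud, well before it runs out
           of authority. */
        if (fabs(trim) > ALERT_FRACTION * MAX_TRIM_DEG)
            annunciate("autopilot using over half its trim authority");
    }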

I have analyzed this and other aviation incidents in a tech report that is
available upon request.  The abstract appears at the end of this note.

Checklists.  In our opinion, a checklist is an admission that humans fail.
After all, if we didn't fail, we wouldn't need to check.  Therefore,
appropriate design of equipment and procedures can probably eliminate or at
least reduce the need for checklists.

We don't need a checklist to ensure that we open the door before passing
through it -- unless it is a glass door.  We used to try to start our autos
without first inserting the key in the ignition switch, but now that the
starter key and the ignition key are the same, we no longer make that
error.  Wiener and Degani point out that pilots sometimes take off without
lowering their flaps (so there is a warning buzzer, and it is a checklist
item), but they never land without lowering flaps, so this condition need
not be checked for.  In similar fashion, the takeoff checklist has all sorts
of items on it, but NOT to advance the throttles.  Yes, it seems obvious,
but that is just the point.

Except for a study by Wiener and Degani that is just now being completed (for
NASA-Ames), there have been NO systematic analyses of the scientific/cognitive
bases of checklists.  They are now constructed by a combination of the
intuitions of chief pilots, experience, and the concerns of the legal staff.
Lists for the same aircraft in different airlines vary dramatically.  This is a
real safety hazard: read the NTSB reports on the Delta Dallas crash and the
Northwest Detroit crash.

Hutchins and I are doing a cognitive analysis of checklists.  We will have
a paper "any month now."

Automated checklists can help, but

 1. They are not the final answer.
 2. Locating them on a front-panel CRT is probably the wrong way to go.
 3. They have to be designed with an understanding of human cognition.
 4. Checklists, procedures, and do-lists should probably be combined
    (a sketch follows below).
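
As one illustration of point 4, here is a minimal sketch of a combined item
record (the fields are invented for illustration, not taken from the
forthcoming Hutchins/Norman analysis): the same entry can drive a do-list
(perform the action), a checklist (verify the state), and a record of who
or what confirmed it.

    /* Illustrative only. */
    enum confirm { UNCONFIRMED, BY_CREW, BY_SENSOR };

    struct item {
        const char *action;       /* do-list:   "set flaps 15"   */
        const char *verify;       /* checklist: "flaps at 15?"   */
        int (*sensed_ok)(void);   /* NULL if no sensor can check */
        enum confirm status;
    };

    static int flaps_at_15(void) { return 0; /* stub sensor */ }

    static struct item takeoff[] = {
        { "set flaps 15",      "flaps at 15?", flaps_at_15, UNCONFIRMED },
        { "advance throttles", NULL,           NULL,        UNCONFIRMED },
    };  /* the second item is done but never checked -- see above */

An automated list built on such records could confirm the sensor-verifiable
items itself and prompt the crew only for the rest, rather than reading
every line off a front-panel CRT.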

 ===
 Abstract of tech report

 The "Problem" of Automation: Inappropriate Feedback and Interaction,
                              Not "Over-Automation"

As automation increasingly takes its place in industry, especially
high-risk industry, it is often blamed for causing harm and increasing the
chance of human error when failures occur.  I propose that the problem is
not the presence of automation, but rather its inappropriate design.  The
problem is that operations are performed appropriately under normal
conditions, but there is inadequate feedback and interaction with the
humans who must control the overall conduct of the task.  When situations
exceed the capabilities of the automatic equipment, the
inadequate feedback leads to difficulties for the human controllers.

The problem, I suggest, is that the automation is at an intermediate level
of intelligence, powerful enough to take over control that used to be done
by people, but not powerful enough to handle all abnormalities.  Moreover,
its level of intelligence is insufficient to provide the continual,
appropriate feedback that occurs naturally among human operators.  This is
the source of the current difficulties.  To solve this problem, the
automation should either be made less intelligent or more so, but the
current level is quite inappropriate.

The overall message is that it is possible to reduce error through
appropriate design considerations.  Appropriate design should assume the
existence of error, it should continually provide feedback, it should
continually interact with operators in an effective manner, and it should
allow for the worst of situations.  What is needed is a soft, compliant
technology, not a rigid, formal one.

Norman, D. A. (1990). The "problem" of automation: Inappropriate feedback
and interaction, not "over-automation". Philosophical Transactions of the
Royal Society of London, B (Paper prepared for the Discussion Meeting,
"Human Factors in High-Risk Situations," The Royal Society (Great Britain),
June 28 & 29, 1989.)

Don Norman, Department of Cognitive Science D-015, University of California,
San Diego, La Jolla, California 92093 USA

------------------------------

Date: Thu, 15 Mar 90 14:52:31 EST
From: [email protected]
Subject: Re: Airbus Crash: Reports from the Indian Press

>   A technical committee armed with comprehensive terms of reference
>   began a probe into the whole Airbus affair last week.  ...

Interestingly enough, it looks like somebody in authority at least suspected
that the results would be embarrassing to the airline (i.e. mismaintenance or
pilot error rather than technical problems).  Normally, in such an accident
investigation, the airworthiness authorities of the aircraft's country of
origin -- i.e., the people who first certified the thing as flyable --
are involved, and the manufacturer is at least kept informed.  Aviation
Week reports that India refused European airworthiness authorities' request
to participate, and also refused information requests from them and from
Airbus Industrie.

                                   Henry Spencer at U of Toronto Zoology
                               uunet!attcan!utzoo!henry [email protected]

------------------------------

End of RISKS-FORUM Digest 9.75
************************