Subject: RISKS DIGEST 17.50

RISKS-LIST: Risks-Forum Digest  Saturday 2 December 1995  Volume 17 : Issue 50

  FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

***** See last item for further information, disclaimers, etc.       *****

 Contents:
Montgomery County, PA, experience with new voting machines (Leonard Finegold)
Sex, Lies and Backup Disks (Peter Wayner)
French civil servants paid twice (Pierre Lescanne)
Risk of gradual failure (Stuart Staniford-Chen)
AT&T Code Policies.  Hmmmm... (Pete McVay)
More on alarms and alarm silencing (Cliff Sojourner)
Re: risks in medical equipment (Bill Harvey)
Re: Is chip theft high-tech crime? (Jacob Kornerup)
Error Checking ('NEW should never abort!' and 'Writing solid code')
 (Randy Gellens)
More Microsoft Word Spelling RISKS (Eli Goldberg)
Re: Spelling Correctors (Alek O. Komarnitsky)
Re: Apple spellchecker (David Silbey)
Re: Spell-checking (Martin Minow)
Re: Spelling Correctors Self-Applied? Not in Microsoft Word (E Foley)
Re: Another Oakland airport radar outage (Risks from the Future?) (PGN)
ABRIDGED info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Fri, 1 Dec 95 17:07:11 PST
From: "Peter G. Neumann" <[email protected]>
Subject: Montgomery County, PA, experience with new voting machines

Leonard X. Finegold of the Physics Department at Drexel University sent me
issues of the West section of the Philadelphia Inquirer for the two
Thursdays (9 and 16 Nov 1995, p.W1 in each case) immediately following the
November 1995 election day.  I have distilled two long articles.

Pennsylvania's Montgomery County spent $3.8 million on new MicroVote voting
machines, and attempted to use them in a very-low-turnout off-year election
on 7 Nov 1995.  Unfortunately, things did not go as planned.  There were
massive voting-machine breakdowns, 150 service calls, long delays in
repairs, and phantom vote tallies (for example, 22,000 write-ins were
recorded for prothonotary).  There were some three-hour waiting lines to
vote, and many people left without voting.  Incomplete results were wildly
erroneous, suggesting the wrong candidate was winning.  Although test runs
had been successful, something about the county's cross-filed candidates
seems to have caused the software to fail in unanticipated ways.  By 4am, the
apparently bogus write-in votes had vanished altogether.  When the smoke
finally cleared, the official results were apparently satisfactory -- and
unchallenged.  It was also a major gremlin-style event: printers, copiers,
elevators, even radios used by the repair crews failed.  (The MicroVote
system reportedly has been used successfully in counties in Kansas, North
Carolina, and Indiana.)

------------------------------

Date: Fri, 1 Dec 1995 15:23:13 -0500
From: [email protected] (Peter Wayner)
Subject: Sex, Lies and Backup Disks

An article in the 1 Dec 1995 *Wall Street Journal* (pg A14) should serve as
a stark reminder of why people should make sure to overwrite their disks when
they delete files.  Jean Lewis is someone from the government who has made
some noise about something illegal connected with the Clinton
administration. (Details are too confusing for this short venue.) She's
apparently a Republican star witness. The Democrats apparently were given a
computer disk with important files on it. Unbeknownst to Ms. Lewis and the
Republicans, the disk also contained the text of a letter she wrote a friend
about her stepson. In the letter she compares him to Bill Clinton, whom she
calls a "lying bastard" for denying he slept with Gennifer Flowers.

The article says that Lewis had deleted the file long before she handed over
the disk containing other files to the Democrats.  Richard Ben-Veniste, a
lawyer for the Democrats, pulled out a copy of the letter during the hearing
and quoted the "lying bastard" phrase as proof that she was merely pursuing a
political vendetta when she made her charges. Here's the article's
quote of Ben-Veniste:

  Apparently you didn't think it was on the disk but it was.
  That's the funny thing about these disks.  I don't understand how
  they do it, but they can find the stuff on the disk that nobody thinks
  is there.  I hate it when that happens, but here it is, Ms. Lewis.

What a town! They spend their waking hours trying to find out what other
people think of them. Then they get upset when it's not good. The moral:
investigate disk erasure programs that overwrite the disk a sufficient
number of times.
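
For the curious, here is a minimal sketch (in POSIX-style C, my own
illustration rather than any particular product) of what "overwrite before
delete" means in practice.  Real secure-erase tools make more passes, vary
the patterns, and worry about caches and spare sectors; the file name below
is made up.

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Overwrite a file's contents several times, then remove it.  Simply
     calling unlink() would leave the old data on the disk for anyone with
     an undelete utility to find. */
  static int overwrite_and_unlink(const char *path, int passes)
  {
      struct stat st;
      char block[4096];
      int fd, pass;
      off_t done;

      fd = open(path, O_WRONLY);
      if (fd < 0)
          return -1;
      if (fstat(fd, &st) < 0) {
          close(fd);
          return -1;
      }
      for (pass = 0; pass < passes; pass++) {
          memset(block, (pass % 2) ? 0x00 : 0xFF, sizeof block);
          lseek(fd, 0, SEEK_SET);
          for (done = 0; done < st.st_size; done += sizeof block)
              write(fd, block, sizeof block);
          fsync(fd);              /* force each pass out of the buffer cache */
      }
      close(fd);
      return unlink(path);        /* only now remove the directory entry */
  }

  int main(void)
  {
      return (overwrite_and_unlink("letter-to-a-friend.txt", 3) == 0) ? 0 : 1;
  }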

  [Also reported by Bill McGeehan <[email protected]>.]

------------------------------

Date: Thu, 30 Nov 1995 10:45:37 +0100
From: Pierre Lescanne <[email protected]>
Subject: French civil servants paid twice

Due to a computer error at the Banque de France, civil servants and other
people paid by the French State were paid twice for the month of November
1995.  Apparently some others were not paid at all.

Pierre Lescanne

------------------------------

Date: Fri, 1 Dec 95 15:15:54 PST
From: [email protected] (Stuart Staniford-Chen)
Subject: Risk of gradual failure

I wanted to record a small risk here for posterity.  My workstation
contains a Pentium chip.  Since this chip generates so much heat, the
motherboard design incorporates a small (1W) cooling fan which sits
immediately atop the chip heat-sink.  This is in addition to the usual
power-supply cooling fan contained in most workstations.

Sometime in the last month or two, this small chip-cooling fan died.  The
only reason that this might be of wider interest is the symptoms this caused
in the computer.  What happened is that, over a period of weeks, the machine
became "flaky".  Early symptoms were occasional difficulty logging in or
launching applications.  Later on, I experienced strange events such as
compiler error messages complaining of non-existent instructions.  These
would go away if I attempted the compilation again.  Towards the end,
cutting and pasting between windows became unreliable (what was pasted would
differ in a few characters from what was cut).

For a long time, the machine was basically usable despite these integrity
problems - usable enough that I remained in denial about the need to take
time to investigate further.  The problems were initially rare, and became
more and more frequent over time.  It was easy at first to put the problem
down to software (though the lack of repeatability should have been a clue).
Finally, the situation became severe enough that I opened the box and
immediately found the burnt out fan.  I am guessing that the fan became
clogged with dust and gradually got less effective before dying altogether.
This would explain the slow worsening of symptoms.

The point is that the machine never gave any diagnostic, and never failed in
any repeatable or well-defined way.  It just gradually became less and less
deterministic in its operation.  The risks of this are obvious, I think.

Stuart Staniford-Chen, Dept of Computer Science, UC Davis, CA 95616
[email protected] w:(916) 754-8742 http://seclab.cs.ucdavis.edu/~stanifor

------------------------------

Date: Tue, 28 Nov 95 18:16:18 PST
From: Pete McVay <[email protected]>
Subject: AT&T Code Policies.  Hmmmm...

I recently lost my AT&T phone credit card, so I applied to them for a
replacement.  I was informed that they would have to change the PIN
(four-digit access code appended to my phone number on the card) and would
issue me a new one. I received the new card, with the new PIN, about a week
ago, and tried to use it today.  It kept rejecting the number as invalid.

I called AT&T, and because I was near the phone I use for billing, the help
desk assistant was able to confirm that it was me.  She tried my card and
my new PIN, and it didn't work.  She then gave me the PIN that worked--my
old one, which was supposedly replaced. She was in a very talkative mood,
and explained that this happens to her quite often.  In her experience, when
a PIN is changed, she has to wait at least a day before putting in the new
PIN, or it won't take.  I can think of a number of reasons for this
little bug, since their database is huge and distributed.  But that's not
the point here...

The help desk person said she has had this problem for at least six months,
and even though she has complained up the line, the waiting bug persists.
This waiting period also isn't documented, so inexperienced personnel tend
to make the mistake over and over, leading to many calls to her desk.

I'm not familiar enough with AT&T software methods to comment extensively
on the whys and hows of their coding policies, but it would seem to me that
their attitude toward this (minor) bug is disturbing.  The bug is obviously a
serious problem for customers, and should also be easy to fix even in a
distributed database environment.  At the very least, a lockout could prevent
new codes from being entered for 24 hours.  If their approach to this bug is in any
way indicative of their general policies, then I really wonder about the
stability of the rest of their code, and I see some of the well-publicized
phone outages "due to software bugs" in a new light.
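
Purely as an illustration of the kind of fix I have in mind (this is my own
sketch in C, not anything I know about AT&T's systems, and all the names are
invented): record the new PIN together with an activation time far enough in
the future for it to propagate, keep honouring the old PIN until then, and
tell the customer exactly when the change takes effect.

  #include <stdio.h>
  #include <string.h>
  #include <time.h>

  #define PROPAGATION_DELAY (24 * 60 * 60)   /* the 24-hour lockout above */

  struct card_record {
      char   old_pin[5];                     /* four digits plus NUL */
      char   new_pin[5];
      time_t new_pin_active_at;              /* when new_pin takes effect */
  };

  /* The PIN a call should be validated against at this moment. */
  static const char *effective_pin(const struct card_record *rec, time_t now)
  {
      return (now >= rec->new_pin_active_at) ? rec->new_pin : rec->old_pin;
  }

  static void change_pin(struct card_record *rec, const char *pin, time_t now)
  {
      char still_valid[5];

      strcpy(still_valid, effective_pin(rec, now));   /* whatever works today */
      strcpy(rec->old_pin, still_valid);
      strcpy(rec->new_pin, pin);
      rec->new_pin_active_at = now + PROPAGATION_DELAY;
      printf("New PIN takes effect at %s", ctime(&rec->new_pin_active_at));
  }

  int main(void)
  {
      struct card_record rec = { "1234", "1234", 0 };
      time_t now = time(NULL);

      change_pin(&rec, "4321", now);
      printf("valid now:      %s\n", effective_pin(&rec, now));
      printf("valid tomorrow: %s\n",
             effective_pin(&rec, now + PROPAGATION_DELAY + 1));
      return 0;
  }

The point is only that exactly one PIN is valid at any instant, and both the
help desk and the customer can be told which one, and until when.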

------------------------------

Date: 30 Nov 95 14:09:00 -0800
From: [email protected]
Subject: more on alarms and alarm silencing

John Strohm's article about the risks of ignoring medical alarms in
RISKS-17.49 struck a nerve.

We had a similar experience.  Our newborn had to spend a few days in
the Newborn Intensive Care Unit (NICU).

All patients (babies) in the NICU are wired up with four sensors: heart
rate, breath rate, temperature, and blood O2 level.  The heart rate and
breath rate sensors were particularly unreliable, intermittently getting
good data and noisy data and no data.

The data monitors were configured to sound an alarm and flash the display
when any of the sensors was reading data out-of-range.  So far so good: an
alarm indicates potential trouble.

The problem is that the data from the sensors was so unreliable that the
monitors' alarms were constantly going off.  In fact, the entire 20-bed NICU
was always noisy with various alarms (patient monitors, IV pumps, etc.).

What troubled me most was the NICU staff's response to the monitor and
IV pump alarms:  ignore them!  When the staff's rounds brought them to
the patient, they would press the "alarm-silence" button.

No one at the NICU had a good answer for me when I asked "why have audible
alarms if you always ignore them?"

The risk here is that a real life-threatening situation would
not be discovered in time to help the patient.  The usual
caveats about over-reliance on technology also apply.

Cliff Sojourner

------------------------------

Date: Thu, 30 Nov 1995 09:45:37 0000
From: "BILL HARVEY" <[email protected]>
Subject: Re: risks in medical equipment

John R Strohm's account of problems with morphine-delivery pumps in
hospitals shows the problems associated with alarms. But there's an
underlying issue.

My mother-in-law was recently hospitalised with what turned out to be severe
dehydration. She was put on an active (pumped) drip system. When we were
visiting, a young doctor made some adjustments to the controls, then left
the room. I noticed that the flow rate was now showing zero. When I queried
this with a nurse, we were told that "the doctor has adjusted the rate". We
pointed out that a dehydrated patient presumably should be getting some
fluid intake, but to no effect. After about five minutes, another doctor
passed by. When we spoke to him, he agreed with us and pressed the "start"
button on the pump.

Part of health service culture is that young doctors often make mistakes,
and experienced nurses can often fix their mistakes informally. I have seen
this happen when I've been a patient. But not in this case. I suspect the
difference is that this case concerned technology. I don't think the nurse
was very sure what the displays and the controls meant. Is this just a
question of training?

I'm reminded of studies which show that women are more intimidated by
digital technology than men. Not because of intelligence or ability, but
because some technologies are socially defined as 'male' - video recorders
and TV remotes being good examples, as well as computers.  In the hospital
context, I suspect technology is perceived as something *other* than normal
nursing care. It is owned by someone else. The other possible explanation is
that nurses are reluctant to question the authority of doctors, even if (to
a lay observer) the doctor has clearly made a mistake. Neither explanation
is very comforting.

Bill Harvey, Quality Assessment Branch, Scottish Higher Education Funding
Council  [email protected]  Tel: 0131 313 6513  Fax: 0131 313 6501

------------------------------

Date: Thu, 30 Nov 1995 11:41:43 -0600
From: Jacob Kornerup <[email protected]>
Subject: Re: Is chip theft high-tech crime? (Rosenthal, RISKS-17.49)

You are right about the semantics of "High-Tech" crime.  The difference
between the food flavoring and RAM chips is that stolen RAM chips are easy to
transport (how many K$ worth do you think fit into a suitcase?) and easy to
sell, because there is a large number of customers for them.  There are even
shops these days that sell "Grey Market" RAM chips, that is, chips of
uncertain origin.  While most of these are legitimate, it can be very
difficult to ascertain the legitimacy of the chips being offered.

The term "High-Tech" comes from the Police Departments. Austin, Texas has
such a division that deals with everything from stolen RAM chips over
software piracy to hacker break-ins. It takes an investment of resources to
train police officers in the area of computers (this includes such mundane
issues as the current price of hardware) and it is a trade-off which crimes
to pursue. From a meeting with member of our local High Tech Crime Unit I
attended, it was clear that they would focus on the cases that had public
exposure. That is an armed robbery for chips or hard drives would get
serious attention, small scale piracy would not. They would spend time on
the internet looking for what looked like thieves trying to unload stolen
goods.

As for the "True High Tech" crimes such as intrusion they did tells us that
they would help the companies who had been victimized by investigating the
crime, but would not give us too many details. It was pretty clear that the
officers were not computer experts, but rather police officers exploring
new grounds.

The disturbing thing was that the local police officers could not get any
help from the FBI; apparently the FBI does not have a "High-Tech" crime unit,
or at least not one that will cooperate with local authorities.  The only
cooperation the local police could get was from similar units in other
"High-Tech" cities such as San Jose and Portland.  Given the non-local nature
of "net-crime", this is troubling.

Jacob Kornerup ([email protected])  Department of Computer Sciences
University of Texas, Austin     http://www.cs.utexas.edu/users/kornerup/

------------------------------

Date: 29 NOV 95 21:19
From: [email protected]
Subject: Error Checking ('NEW should never abort!' and 'Writing solid code')

In Risks 17:48, David Chase <[email protected]> wrote:

> So, I ask, those of you theorizing so confidently about code which
> recovers from errors, have you written such code?  Have you tested it
> thoroughly?  Do you have confidence that the OS code to handle this
> situation was tested thoroughly?  (If so, why?  Do you think people
> write and run this sort of code every day?  It's not exactly as common
> as opening and closing files.)

In my experience (with Unisys A Series systems), programs (both vendor
supplied and user written) make heavy use of error recovery facilities.
If this stuff breaks, we would know about it right away.  There are also
test suites for the OS error recovery facilities.

Of course, these systems handle errors and error recovery differently
from most.  That's one of the reasons they are so much fun to use.

As best I could make out, David was talking about stack overflow
conditions.  On the A Series, as long as there is an error handler
statement at a safe place (that is, not at the top of an already
overflowed stack), if a stack overflow happens, the stack is cut back,
and control passes to the error handling statement.  (The program can
get a hidden stack overflow, but the OS will just stretch the stack for
it.  Only if the stack can't be stretched will an actual stack overflow
condition exist.)
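
For readers more used to Unix than to the A Series, here is a rough analogy,
a sketch of mine in POSIX C rather than anything Unisys ships: a handler is
established at a safe point, the overflow signal is delivered on an alternate
stack, and control is cut back to the safe point.  (Strictly speaking,
longjmp'ing out of a SIGSEGV handler is not guaranteed by the standard, but
it is the usual technique runtimes use to catch runaway recursion.)

  #include <setjmp.h>
  #include <signal.h>
  #include <stdio.h>
  #include <string.h>

  static sigjmp_buf safe_point;
  static char altstack[64 * 1024];      /* the handler runs on this stack */

  static void on_overflow(int sig)
  {
      (void)sig;
      siglongjmp(safe_point, 1);        /* "cut the stack back" */
  }

  static int runaway(int depth)
  {
      char pad[4096];                   /* make each frame large */
      pad[0] = (char)depth;
      return runaway(depth + 1) + pad[0];   /* not a tail call */
  }

  int main(void)
  {
      stack_t ss;
      struct sigaction sa;

      ss.ss_sp = altstack;
      ss.ss_size = sizeof altstack;
      ss.ss_flags = 0;
      sigaltstack(&ss, NULL);

      memset(&sa, 0, sizeof sa);
      sa.sa_handler = on_overflow;
      sa.sa_flags = SA_ONSTACK;         /* deliver the signal on altstack */
      sigemptyset(&sa.sa_mask);
      sigaction(SIGSEGV, &sa, NULL);

      if (sigsetjmp(safe_point, 1) == 0)
          (void)runaway(0);             /* blows the normal stack */
      else
          puts("stack overflow caught; continuing at the safe point");
      return 0;
  }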


In Risks 17:48, [email protected] (Edward Reid) wrote:

> Security, in the broad sense, is a joint responsibility of the
> compilers, the instruction set architecture (ISA), and the operating
> system (MCP).  The compilers do not generate code which
> unconditionally violates security, but often the generated code is
> further checked by the ISA or the MCP.  For example, the ISA checks
> array bounds and prevents the code from accessing memory outside that
> allocated to the task.  The MCP manages file access security.

The A Series systems can best be thought of as object-oriented at the
hardware level.  Tasks (threads, stacks, processes), files, arrays, etc.
are all objects.  The compilers won't let you perform an improper action
on an object.  At a lower level, the hardware ensures that the operator
and the operand match.  (Can't do a normal store onto a pointer, for
example).  Programs can't access memory per se.  They can only access
objects (scalars, arrays, files, tasks, etc.).  A lot of the bugs people
fight with on other systems just aren't an issue.
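
As a contrast, a tiny sketch in ordinary C (my own, and deliberately buggy):
nothing in the language or the hardware stops the store one element past the
end of the array, so instead of a clean trap at the point of failure you get
silent corruption of whatever happens to be nearby.

  #include <stdio.h>

  int main(void)
  {
      int canary = 42;    /* an innocent bystander on the stack */
      int a[4];
      int i;

      for (i = 0; i <= 4; i++)    /* off-by-one: also writes a[4] */
          a[i] = 0;

      /* A bounds-checked architecture would fault at a[4].  Here the store
         is silently accepted; depending on the compiler and stack layout
         the program may print a corrupted value, crash later, or appear
         to work. */
      printf("canary = %d\n", canary);
      return 0;
  }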


In Risks 17:48, [email protected] (Thomas Lawrence) wrote:

> In my own experience, there are a great many assertions which are
> unacceptably expensive to check.

Many of the examples cited aren't more expensive on the A Series, for
reasons previously noted.  For example, if I declare a data structure
or object in a procedure, the compiler flags the block, and when the
block is exited, the hardware calls the OS to deallocate the objects.


> Perhaps the best solution is to give 2 versions of the program to the
> end user.  One with debugging....  The other without....  Then let the
> user decide which to use.

Most A Series system software in fact comes in two flavors: regular and
diagnostics.  The latter typically includes many extra validity checks,
and trace functions.
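
The same two-flavor idea is easy to approximate in ordinary C (a sketch of
mine, not A Series code): compile the checks in for the diagnostics build and
compile them out of the regular one.

  #include <assert.h>
  #include <stdio.h>

  static double safe_divide(double num, double den)
  {
      assert(den != 0.0);     /* present only in the diagnostics flavor */
      return num / den;
  }

  int main(void)
  {
      printf("%g\n", safe_divide(10.0, 4.0));
      return 0;
  }

  /* Diagnostics flavor:  cc -o prog_diag prog.c
     Regular flavor:      cc -DNDEBUG -o prog prog.c   (asserts removed)  */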

Randall Gellens  Mail Stop MV 237   [email protected]  (714) 380-6350

------------------------------

Date: Thu, 30 Nov 95 09:29:43 -0500
From: Eli Goldberg <[email protected]>
Subject: More Microsoft Word Spelling RISKS

Peter Neumann's story about the Word spelling checker glitches brought back
memories of Microsoft's sales presentation for the (then new) Office 4.2 to
the chemical company that I interned at last summer.

Imagine over two hundred employees packed into an auditorium, with a
Microsoft employee at the front excitedly running through his
"gee-whiz-bang" software presentation of the virtues of the new Microsoft
Office.

The salesman started to demonstrate the "Autocorrect" feature in Word, which
automatically fixes the most common typos as soon as users make them.  It
also includes a feature allowing users to add their most common typos to the
list.

To demonstrate this, the presenter deliberately misspelled a word.  As I
recall, he mistyped "Personal Memo" as "Porsenal Memo".

The computer naturally flagged it as a typo.  Good so far.  To show how
easily Word fixes such common typos, the salesman activated the "Autocorrect"
function to fix it.

Unfortunately, he entered "Personall" as the automatic correction that Word
should now substitute for "Porsenal"!  OOPS.  Wrong spelling.  But no problem.
Word gladly accepted it anyway.

He then proceeded to unintentionally demonstrate how Word can now
automatically add typos to all of his "Personall Memos".

After he concluded this segment of the presentation by proclaiming the
brilliance of the program's designers and asking us, "What do you
think...is Office a work of magic?", I nearly fell over laughing in my
chair.

In short, Microsoft's design implicitly assumed that users would be able to
spell words correctly in order to use their spell checker.  I suspect that
this may have been a questionable assumption.  But a damned funny sales
presentation. ;)
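
For what it's worth, the underlying mechanism is nothing more than a lookup
table of typo/replacement pairs, and nothing checks that the replacement
itself is spelled correctly.  A sketch (mine, in C, not Word's actual
implementation):

  #include <stdio.h>
  #include <string.h>

  struct autocorrect_entry { const char *typo; const char *replacement; };

  static struct autocorrect_entry table[] = {
      { "teh",      "the"       },
      { "Porsenal", "Personall" },   /* the salesman's mis-typed correction */
  };

  static const char *autocorrect(const char *word)
  {
      size_t i;
      for (i = 0; i < sizeof table / sizeof table[0]; i++)
          if (strcmp(word, table[i].typo) == 0)
              return table[i].replacement;
      return word;
  }

  int main(void)
  {
      /* Every future "Porsenal Memo" now becomes a "Personall Memo". */
      printf("%s Memo\n", autocorrect("Porsenal"));
      return 0;
  }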

Eli Goldberg  Turner Broadcasting, QA Engineer

------------------------------

Date: Thu, 30 Nov 95 00:27:46 MST
From: [email protected] (Alek O. Komarnitsky)
Subject: Re: Spelling Correctors (RISKS-17.49)

The classic example (IMHO) is FrameMaker ... which flags the word
"Interleaf" and recommends you change it to "FrameMaker".

Yes, there is a "risk" there ... but I personally appreciate the programmers
who had a sense of humour here! ;-)

alek

P.S. I don't know if this is true in the latest 5.0 version.

------------------------------

Date: Thu, 30 Nov 1995 14:52:58 +0100
From: [email protected] (David Silbey)
Subject: Re: Apple spellchecker

1.  Macintosh, not MacIntosh.

2.  I wasn't aware that Apple made a spellchecker; an OS certainly, and
computers, of course, but the only spellcheckers I can think of would be the
ones made by Claris, which is an independent subsidiary of Apple.  A picked
nit, almost certainly.

3.  'Laserwriter' _is_ an Apple product, while 'Laserjet' is from
Hewlett-Packard.

David J Silbey     Duke University     [email protected]

  [Noted by [email protected] (Elliot Wilen), [email protected] (Robert Dorsett),
  and "Barrett P. Eynon" <[email protected]>, among others.
  Scott said correctly that the Laserwriter is an Apple product -- I
  goofed in placing the parenthetical.  For the record, his observation
was that a product on a Macintosh system suggested the non-Apple
alternative.  PGN]

------------------------------

Date: Thu, 30 Nov 1995 09:53:03 +0100
From: [email protected] (Martin Minow)
Subject: re: Spell-checking (RISKS-17.49)

The spell checker on ClarisWorks 4.0 accepted the following sentence from
Scott Siege's item in Risks 17.32:
   The ...  spellchecker suggested changing "Laserwriter" to "Laserjet".
It offered the following suggestions:
   spellchecker -> spell checker
   Laserwriter -> LaserWriter
   Laserjet -> LaserJet
All of these seem reasonable to me.

Martin Minow  [email protected]

  [Good.  Things are improving.  PGN]

------------------------------

Date: Thu, 30 Nov 1995 10:43:26 -0400
From: [email protected]
Subject: Re: Spelling Correctors Self-Applied? Not in Microsoft Word

Allow me to point out that the Word spellchecker is actually a third-party
plug-in, which Microsoft buys from a well-known publisher of digital
reference materials. As with any good dictionary, the Word spellcheck module
does not incorporate words until they can be considered by the editors to
have become a part of the language. When Microsoft purchased this product,
probably in late 1993, most of these words were not part of the language or
did not have the currency they do today.

The Risks of such language databases are already well-mitigated, in my
opinion, by the capacity of Word and similar programs to add words to the
spellcheck database. They could be mitigated further by Microsoft offering
updates to the dictionary, perhaps for download on its website, but with the
feature of user customization, who would bother to do this? Would you?

I'm not a Microsoft apologist, but this is less a Risk than a potshot at a
popular target.

------------------------------

Date: Thu, 30 Nov 95 8:42:25 PST
From: "Peter G. Neumann" <[email protected]>
Subject: Re: Another Oakland airport radar outage (Risks from the Future?)

There was a persistent iterative off-by-one error in the three dates
mentioned yesterday in RISKS-17.49.  I made the following corrections in the
archive copy at FTP.SRI.COM.

The correct dates were simply one day earlier:

>The Oakland airport radar failed again on 28 Nov 1995 for about two hours,

>... There had also been a brief failure on 27 Nov 1995, ...

>[Source: *San Francisco Chronicle*, 29 Nov 1995, A13.

 Julian Elischer <[email protected]> noted the glitch and wondered if he
 could have had the stock market pages or race results from the 30 Nov
 paper on 29 Nov.

------------------------------

Date: 30 November 1995 (LAST-MODIFIED)
From: [email protected]
Subject: ABRIDGED info on RISKS (comp.risks)

The RISKS Forum is a moderated digest.  Its USENET equivalent is comp.risks.
SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent) on
your system, if possible and convenient for you.  BITNET folks may use a
LISTSERV (e.g., LISTSERV@UGA): SUBSCRIBE RISKS or UNSUBSCRIBE RISKS.  [...]
DIRECT REQUESTS to <[email protected]> (majordomo) with one-line,
  SUBSCRIBE (or UNSUBSCRIBE) [with net address if different from FROM:]
  INFO     [for further information]

CONTRIBUTIONS: to [email protected], with appropriate,  substantive Subject:
line, otherwise they may be ignored.  Must be relevant, sound, in good taste,
objective, cogent, coherent, concise, and nonrepetitious.  Diversity is
welcome, but not personal attacks.  [...]
ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY.
Relevant contributions may appear in the RISKS section of regular issues
of ACM SIGSOFT's SOFTWARE ENGINEERING NOTES, unless you state otherwise.

RISKS can also be read on the web at URL http://catless.ncl.ac.uk/Risks

RISKS ARCHIVES: "ftp ftp.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>
cd risks<CR> or cwd risks<CR>, depending on your particular FTP.  [...]
[Back issues are in the subdirectory corresponding to the volume number.]
  Individual issues can be accessed using a URL of the form
    http://catless.ncl.ac.uk/Risks/VL.IS.html      [i.e., VoLume, ISsue]
    ftp://unix.sri.com/risks  [if your browser accepts URLs.]

------------------------------

End of RISKS-FORUM Digest 17.50
************************