Subject: RISKS DIGEST 17.56

RISKS-LIST: Risks-Forum Digest  Tuesday 19 December 1995  Volume 17 : Issue 56

  FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

***** See last item for further information, disclaimers, etc.       *****

 Contents:
Navy hacked by Air Force (Darrell D. E. Long)
Japanese breeder-reactor incident (Chiaki Ishikawa)
Re: Montgomery County, PA, voting machines (Gary Greenhalgh)
Definitions for hardware/software reliability engineering
 (Meine van der Meulen via K. van de Wetering)
Medical Diagnosis by Computer (amplification) (Gretchen Herbkersman)
Pay online, release your SSN (Robert Mayo)
Indelible words (Brian Hawthorne)
Re: Another Sign Spoof (Don Root, Coleman)
Re: a well-managed risk (Andrew Koenig)
ABRIDGED info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Mon, 18 Dec 95 13:54:26 -0800
From: "Darrell D. E. Long" <[email protected]>
Subject: Navy hacked by Air Force

http://www.telegraph.co.uk/et/

A few clicks and then the e-mail message entered the ship's control
system...

War of the microchips: the day a hacker seized control of a US battleship

BY SIMPLY dialing the Internet and entering some well-judged keystrokes, a
young US air force captain opened a potentially devastating new era in
warfare in a secret experiment conducted late last September. His target was
no less than gaining unauthorised control of the US Navy's Atlantic Fleet.

Watching Pentagon VIPs were sceptical as the young officer attempted to do
something that the old Soviet Union had long tried to do and failed. He was
going to enter the very heart of the United States Navy's warships - their
command and control systems.

He was armed with nothing other than a shop-bought computer and modem. He
had no special insider knowledge but was known to be a computer whizzkid,
just like the people the Pentagon most want to keep out.

As he connected with the local node of the Internet provider, the silence
was tangible. The next few seconds would be vital. Would the world's most
powerful navy be in a position to stop him?

A few clicks and whirrs were the only signs of activity. And then a
seemingly simple e-mail message entered the target ship's computer system.

First there was jubilation, then horror, back on dry land in the control
room at the Electronic Systems Centre at Hanscom Air Force Base in
Massachusetts. Within a few seconds the computer screen announced "Control
is complete."

Out at sea, the Captain had no idea that command of his multi-million-dollar
warship had passed to another. One by one, more targeted ships surrendered
control as the codes buried in the e-mail message multiplied inside the
ships' computers. A whole naval battle group was, in effect, being run down
a phone-line. Fortunately, this invader was benevolent. But if he could do
it ...

Only very senior naval commanders were in the know as the "Joint Warrior"
exercise, a series of experiments to test defence systems, unfolded between
September 18 and 25. Taking over the warships was the swiftest and most alarming
of the electronic "raids" - and a true shock for US military leaders. "This
shows we have a long way to go in protecting our information systems," said
a senior executive at the airbase where the experiment was conducted.

The exact method of entry remains a classified secret. But the Pentagon
wanted to be the first to test the extent of their vulnerability to the new
"cyberwarriors" - and had the confidence to admit it.

Now they believe they know what they are dealing with and the defences are
going up.


Reply to Electronic Telegraph - [email protected]
Electronic Telegraph is a Registered Service Mark of The Telegraph plc

------------------------------

Date: Tue, 19 Dec 1995 19:47:58 +0900
From: Chiaki Ishikawa <[email protected]>
Subject: Japanese breeder-reactor incident

The Japanese breeder reactor Monju had an accident early this month in which
the coolant, sodium (natrium, Na), spewed out of a yet-to-be-identified
rupture in the secondary cooling loop.

What surprised me was this:

The operator had prepared for a major accident in which 150 cubic meters of
sodium would escape, had run large-scale accident simulations, and had
written the operator manual accordingly.  One of the accident simulations
used a real building and, from what I read, was a very solid experiment.
So they were quite confident that they were ready for a mishap.

In this particular accident, the amount of sodium that spewed out was about
1 cubic meter, much smaller than what the operator expected for a major
accident; as a matter of fact, the sensors were slow to catch the escaping
sodium that filled the small room where the rupture occurred.  Because of
this, the operators on site were slow to discover that sodium had indeed
escaped from the closed loop.  Thus the shutdown of the reactor began about
1 hour 40 minutes after the smoke-detecting fire alarm sounded.  (The
operator's manual stated that if sodium escapes, shutdown procedures are to
be taken, but it did not say exactly what to do if a fire alarm sounded in
the building that houses part of the secondary loop.)

Reading today's newspaper Asahi Shimbun, I came to the conclusion that the
operating company was not prepared for an accident on this minor scale, so
to speak.

When we hear that someone has prepared for and is ready to cope with an
accident of a certain quantitative severity index, say X, we naturally
assume that all smaller accidents with index x (< X) will be dealt with by
that someone without problems.  I.e.,

       for all accidents such that the severity index x satisfies
               0 <= x < X,
       they are ready!  (Or are they?)

This is NOT true.  Come to think of it, this is obvious.  I wonder why I
and many others were led to believe that the operating company was ready
for smaller accidents at all.
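
To make the fallacy concrete, here is a minimal sketch in Python (the
numbers and the threshold are invented; this is not Monju's actual
instrumentation) of how a detector calibrated against the design-basis
accident can be blind to a much smaller real leak:

  # Hypothetical: the alarm threshold is set as a fraction of the
  # design-basis spill, so a small leak never trips it promptly.
  DESIGN_BASIS_M3 = 150.0
  ALARM_THRESHOLD_M3 = 0.1 * DESIGN_BASIS_M3   # assumed calibration

  def leak_alarm(estimated_leak_m3):
      # True if the estimated leak volume trips the dedicated alarm.
      return estimated_leak_m3 >= ALARM_THRESHOLD_M3

  print(leak_alarm(150.0))   # True: the rehearsed design-basis spill
  print(leak_alarm(1.0))     # False: a spill like this month's ~1 m^3

Being ready for severity X says nothing about detecting, let alone
handling, severities far below X.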

This seems to me a lapse of judgement that occurs whenever we hear a
quantitative index of accident severity.  We should probably never trust a
quantitative index by itself when we talk about risks, because we cannot
tell whether all accidents up to a certain severity have been considered,
or only certain sampled accidents of that severity.  (My English may not be
good enough to convey the nuance, but I hope the readers of RISKS notice
this (almost) blind faith, or lapse of judgement.  At least, I did not
realize they had not paid attention to smaller accidents!)

BTW, the spewed sodium (500 degrees C when it escaped the cooling pipe)
reacted with oxygen and water vapor in the air, punched holes in the air
duct, and MELTED(!) catwalks.  From the video taken by alarmed local city
officials who investigated the site past midnight after the accident, it
looks like a real mess, although the amount is indeed small and no
significant radioactive material escaped.

Chiaki Ishikawa  Personal Media Corp.   Shinagawa, Tokyo, Japan 141
[email protected]

------------------------------

Date: Mon, 18 Dec 1995 15:07:39 -0800
From: [email protected] (Gary Greenhalgh, Microvote)
Subject: Re: Montgomery County, PA, voting machines (Finegold, RISKS-17.50)

On November 7, 1995, we experienced several problems in Montgomery County
with both the electronic voting machines and the software.  However, none
of these problems affected the basic integrity of the system, AS MONTGOMERY
COUNTY HAS NOW OFFICIALLY CERTIFIED ALL OF THE ELECTION RESULTS. IN
ADDITION, THE COUNTY HAS RECEIVED NO CHALLENGES TO THE ELECTRONIC VOTING
MACHINE RESULTS!  First of all, it should be noted that the County, against
our advice, kept some 60 machines (of a total of 900 purchased by the
County) as spares.  These machines could easily have been deployed to
high-turnout precincts, which would have reduced voting lines considerably.
Second, there were NOT "massive voting machine" problems as reported by the
press.  Of the 822 machines used on election day, we had 48 machine service
calls DOCUMENTED IN WRITING, most of which were handled easily.  In fact,
our voting system EXCEEDED THE FEDERAL ELECTION COMMISSION STANDARDS FOR
AVAILABILITY ON ELECTION DAY!  The "150 service calls" reported by the
press counted ALL questions received at the County's voting-machine
warehouse from precinct workers, including ballot questions, deployment of
machines, etc.

In researching the above machine calls, we have now TENTATIVELY concluded
that many of the service calls were "power failure" calls involving an
anti-moisture spray (called a conformal coating) that may have leaked into
the contact area between the main controller chip and the power board in
certain machines.  This spray is absolutely essential to ensuring equipment
integrity, especially in areas of the country with high humidity, so the
coating (the power boards are dipped in it) has to be applied.  However,
THIS IN NO WAY AFFECTED THE INTEGRITY OF THE VOTE COUNT.  Indeed, there is
a very simple procedure by which precinct workers can get a machine up and
running again on election day.  If this is confirmed as the main culprit,
we will resolve the problem by "cleaning" these contacts in each of the
8,070 machines now in use throughout the United States.

As to the election-night software results, please note, first of all, that
our unofficial election-night results were being "dumped" into the
Microvote Media Package, which displayed the results electronically for the
press.  We have now discovered that a software operator, in adding 9
machines to the inventory at about 10:00 a.m. on election morning,
inadvertently added the machines without assigning them to their proper
precincts and without "reinitializing" the system.  Thus, when we read some
of the machine cartridges, the first unofficial totals were posted
incorrectly.  When we discovered the problem, we reinitialized the system
and the results were correct.  Unfortunately, by that time certain press
reporters had already run with the original incomplete, unofficial, and
inaccurate election results.

Microvote takes the blame in that our software should not allow an operator
to add voting machines in this fashion; we also take the blame for not
providing Microvote staff to monitor the election-day process.  Neither of
these will happen again.

------------------------------

Date: Tue, 19 Dec 1995 12:05:22 +0100 (MET)
From: [email protected] (K. van de Wetering)
Subject: Definitions for hardware/software reliability engineering

>FROM Meine van der Meulen  SIMTECH  kvdweter@simtech
I would like to draw your attention to a dictionary of terms I wrote:

Meulen, M.J.P. van der, Definitions for Hardware/Software
Reliability Engineers, ISBN 90-9008437-1, June 1995, 137 pages.

The book is a glossary of terms in the field of hardware/software
reliability engineering.  It lists existing definitions from standards and
other sources, arranged alphabetically.  Every definition is accompanied by
its source; the list of sources is at the back of the book.

The structure of the book permits various uses.  When writing new standards
and reports, it helps one find existing definitions.  Many terms come with
several definitions, and comparing them deepens insight.  Of course, it can
also be used simply as a dictionary in which unknown terms are looked up.

The book has over 1700 entries, many of them containing more than one
definition.  As an example, here are the first entries for the letter F:

Fail-Safe The built-in capability of a system such that
   predictable (or specified) equipment (or service) failure
   modes only cause system failure modes in which the system
   reaches and remains in a safe fall-back state (IEC65A 122)
   (IEC65A 94). This is the capacity of the system to remain
   in a safe condition when a failure occurs or to skip
   directly into another safe condition (VDE3542) (VDE0801). A
   concept that defines the failure direction of a
   component/system as a result of specific malfunctions. The
   failure direction is toward a safer or less hazardous
   condition (CCPS). Design features which provide for the
   maintenance of safe operating conditions in the event of a
   malfunction of control devices or an interruption of an
   energy source (). The capability to go to a predetermined
   safe state in the event of a specific malfunction
   (ISA-dS84/16N). Pertaining to a system or component that
   automatically places itself in a safe operating mode in the
   event of a failure (IEEE Std 610.12-1990). A design
   property of an item in which the specified failure mode is
   predominantly in a safe direction (IEC1508).
Fail-Safe Shutdown The ability of a PC-system to have its
   outputs assume a predefined state within a specified delay
   after detecting the occurrence of a power supply voltage
   drop or an internal failure (IEC1131-1).
Fail Soft Pertaining to a system or component that continues to
   provide partial operational capability in the event of
   certain failures (IEEE Std 610.12-1990).
Fail-to-Danger An equipment fault which inhibits or delays
   actions to achieve a safe operational state should a demand
   occur. The fail-to-danger fault has a direct and
   detrimental effect on safety (CCPS). Design features which
   inhibit or delay automatic shut-down of a process on
   failure of a critical control device or on interruption of
   energy source. The fail-to-danger fault has a direct and
   detrimental effect on safety ().
Failure A system failure occurs when the delivered service
   deviates from the specified service, where the service
   specification is an agreed description of the expected
   service. A failure, in short, is the manifestation of an
   error in the system or software (IEC65A 122) (IEC65A 94).
   The termination of the execution of an established task
   from a unit as a result of a cause which is located in the
   unit itself and within the framework of the permitted
   working conditions (VDE3542) (VDE0801). A system failure
   occurs when the delivered service deviates from the
   specified service. The specified service is stated in the
   service specification and is an agreed description of the
   expected service (IEC65A 96). Termination of the ability of
   an item to perform its specified function. OR,
   Non-conformance to some defined performance criteria
   (Smith81). Behaviour of a unit which is carrying out some
   task which is not in accord with its intended or specified
   function (NTG3004) (TV86). The termination of the ability
   of an item to perform a required function (BS4778)
   (O'Connor81). The inability of a system or system component
   to perform a required function within specified limits. A
   failure may be produced when a fault is encountered
   (DO178b). A failure is the manifestation of an error in the
   system or software (). The delivered service deviates from
   the specified service (Shell94). The termination of the
   ability of an item or equipment to perform its required
   function. Failures may be unannounced and not detected
   until the next test or demand (unannounced failure), or
   they may be announced and detected by any number of methods
   at the instant of occurrence (announced failure) (IEEE Std
   500-1984) (OREDA84). The termination of the ability of an
   item to perform a required function or its inability to
   perform within previously specified limits (ISO7.30). The
   event, or inoperable state, in which any item or part of an
   item does not, or would not, perform as previously
   specified (MIL721). The inability of a system or component
   to perform its required functions within specified
   performance requirements (IEEE Std 610.12-1990). (1) The
   termination of the ability of a functional unit to perform
   its required function. (2) An event in which a system or
   system component does not perform a required function
   within specified limits. A failure may be produced when a
   fault is encountered (IEEE Std 729-1983) (IEEE Std 982.1-
   1988). A system failure occurs when the delivered service
   deviates from the intended service. A failure is the effect
   of an error on the intended service (IEC1508). A failure of
   any technical unit under consideration occurs if the
   permissible deviation from the performance target for this
   unit is exceeded (DIN25424).
Failure Analysis Subsequent to a failure, the logical
   systematic examination of an item, its construction,
   application, and documentation to identify the failure mode
   and determine the failure mechanism and its basic cause
   (MIL721).
Failure, Catastrophic See Catastrophic Failure.
Failure Cause The identified original cause of the failure; the
   circumstances during design, manufacturing, assembly,
   installation or use that have led to failure (OREDA84).
Failure, Common Cause See Common Cause Failure.
Failure, Common Mode See Common Mode Failure.

The cost of the book is NLG 150, excluding VAT and postage.  The book can
be obtained only through my office.

ir. M.J.P. van der Meulen
Simtech b.v.
Oostmaaslaan 71
3063 AN Rotterdam
The Netherlands
Phone: +31-10-4244386  Fax: +31-10-4244253  Email: [email protected]

------------------------------

Date: Tue, 19 Dec 95 09:01:28 PST
From: [email protected] (Gretchen Herbkersman Dept 5428)
Subject: Medical Diagnosis by Computer (amplification)

Sorry, I thought I'd sent more info.  Here's a quote from the Wall Street
Journal, Monday, Dec. 18, 1995, page B1:

"The next time you visit the doctor with an aching back, it may be a
computer that decides you don't need tests and sends you home with
painkillers and a list of exercises.

"So-called smart software programs, packed with data on the treatment of
maladies from heart disease to depression, are the newest tools of
managed-care companies in their constant fight to control expenses.
Eventually there may be no arguing with them -- a doctor will follow the
computer's advice or find another health plan to work for.

"The programs are operated either by an HMO clerk who answers an 800 number
when your doctor calls to authorize treatments and tests, or, increasingly,
directly by doctors in their offices.  They are a powerful new tool for
deciding whether your case justifies what you're claiming."

As a sometime programmer and dabbler in databases, I find this news
exceedingly disconcerting.  It is the best reason I can think of for staying
well.  At any rate, please read the rest of the article.
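
To make the concern concrete, a guideline-driven authorization rule can be
startlingly simple.  The following sketch is entirely hypothetical (the
conditions and thresholds are invented, not any vendor's actual product):

  # Hypothetical sketch of a rule-based treatment authorization.
  def authorize_imaging(complaint, duration_weeks, red_flags):
      # Returns (approved, advice); the rules are invented.
      if red_flags:   # e.g., trauma, fever, neurological deficit
          return True, "approve imaging"
      if complaint == "low back pain" and duration_weeks < 6:
          return False, "painkillers and exercises; recheck in 6 weeks"
      return True, "approve imaging"

  print(authorize_imaging("low back pain", 2, red_flags=False))
  # -> (False, 'painkillers and exercises; recheck in 6 weeks')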

------------------------------

Date: Mon, 18 Dec 95 17:10:26 -0800
From: Robert Mayo <[email protected]>
Subject: Pay online, release your SSN

If you pay a bill on-line using Quicken's On-Line bill payment, you wouldn't
suspect that you are giving your SSN out.  But you are!

If the merchant receives one check on a given day from Quicken users, your
SSN is safe.  But if they receive more than one check on a given day, they
get a listing of all the checks, each sender's bank account number, name,
and Social Security Number.  Having this information all in one place is
especially conducive to fraud.

Information is available at the URL: http://www.mc4.com/mayo/quick.html

--Bob [usual disclaimers omitted]

------------------------------

Date: Tue, 19 Dec 95 21:49:22 0500
From: Brian Hawthorne <[email protected]>
Subject: Indelible words

On Wed, 18 Sep 91 11:57:20 EDT I wrote to RISKS
(now archived in http://catless.ncl.ac.uk/Risks/12.36.html) a brief note on:

"The risks of a computer-based forum"

I said:

>Many people seem to approach e-mail and submissions to forums like RISKS as
>informal conversation. Given the persistence of the typed word, however, it
>may often be more appropriate to consider these forums as un-refereed
>journals.

Today, I did a search for myself on Digital's new search site
(http://www.altavista.digital.com).

Much to my surprise, one of the first items was a link to a posting from
1991 where I mused on the persistence of the typed word in on-line forums.

If you search for a person's name on the Alta Vista site, you can find all
the newsgroups they post to, as well as many past postings that have made it
to archive sites on the Web.

The risk? The same as it was in 1991, but magnified manyfold.

------------------------------

Date: Tue, 19 Dec 95 01:23:45 PST
From: [email protected] (Don Root)
Subject: Re: Another Sign Spoof (RISKS 17.54)

I drive the subject segment of I-80 in Richmond, CA on a regular basis.
I suspect the Changeable Message Sign in question belongs to a contractor
involved with the I-80 reconstruction project.  If so, this sign is
trailer mounted, with provisions for a local key pad.  Other versions
of this trailer-mounted sign type have cellular phone modems for remote
message configuration.

The RISKs should be obvious...

Don Root  Calif. Office of Emergency Services  Telecommunications Branch

------------------------------

Date: Tue, 19 Dec 1995 12:12:00 +0000
From: [email protected]
Subject: Re: Another sign spoof (Levy, RISKS-17.55)

It's been my understanding that a lot of these signs actually have some
sort of modem connected to a radio.  If you know the frequency and can
transmit data, you can control the sign.  Apparently, because most people
would not have this capability, no security was built into these types of
signs.  Some signs, like "Traction tires required", are triggered in the
same fashion, except using just DTMF tones to turn them on or off.

In either case, simply monitoring these frequencies can reveal what needs
to be done to control the signs; alternatively, one could just record what
was sent and play it back later.
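
The classic countermeasure is to make each command authenticated and
non-replayable.  A minimal sketch, assuming an invented packet format and a
pre-shared key (nothing here reflects any actual sign protocol):

  # Hypothetical: authenticate sign commands with an HMAC and a
  # sequence number so recorded commands cannot be replayed.
  import hmac, hashlib

  KEY = b"shared-secret-key"   # assumed pre-shared with the sign

  def make_command(seq, text):
      msg = ("%d|%s" % (seq, text)).encode()
      tag = hmac.new(KEY, msg, hashlib.sha256).hexdigest().encode()
      return msg + b"|" + tag

  last_seq = -1                # sign-side state: highest seq accepted

  def accept(packet):
      global last_seq
      msg, _, tag = packet.rpartition(b"|")
      good = hmac.new(KEY, msg, hashlib.sha256).hexdigest().encode()
      if not hmac.compare_digest(tag, good):
          return False         # forged or corrupted packet
      seq = int(msg.split(b"|", 1)[0].decode())
      if seq <= last_seq:
          return False         # replay of an old command
      last_seq = seq
      return True

  pkt = make_command(1, "TRACTION TIRES REQUIRED")
  print(accept(pkt))           # True: fresh, authentic command
  print(accept(pkt))           # False: recorded and replayed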

------------------------------

Date: Mon, 18 Dec 1995 19:17:17 EST
From: Andrew Koenig <[email protected]>
Subject: Re: a well-managed risk (Whittle, RISKS-17.47)

Jerry Whittle points out that aircraft fuel gauges may be inaccurate.  I
must confess I didn't press on this particular issue, because (a) there are
several reasons the original estimates might be off and (b) there are
several ways to cross-check the estimates when determining (mid-flight) what
has happened.

For example, an engine might be burning more fuel than it should, but if it
is, one would expect that to show up in two places: the fuel flow gauge for
that engine and the total fuel remaining.  It might also show up in other
engine instruments.  Or the flight might be progressing more slowly than
intended because of unforecast headwinds.  But that would show up in
increased time between checkpoints, not to mention the inertial navigation
system.
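
As a minimal sketch of such a cross-check (the quantities and tolerance are
invented for illustration, not a real flight procedure), one can compare
the gauge reading with the fuel remaining implied by the fuel-flow
totalizer:

  # Hypothetical fuel consistency check; all numbers are invented.
  def gauges_disagree(fuel_at_takeoff_lb, totalizer_burn_lb,
                      gauge_remaining_lb, tolerance_lb=500.0):
      # Fuel remaining implied by the flow totalizer.
      expected_lb = fuel_at_takeoff_lb - totalizer_burn_lb
      return abs(expected_lb - gauge_remaining_lb) > tolerance_lb

  print(gauges_disagree(40000.0, 12000.0, 27800.0))  # False: consistent
  print(gauges_disagree(40000.0, 12000.0, 24000.0))  # True: investigate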

In other words, the fuel gauges might be inaccurate, but if all the relevant
instruments agree, they're probably right.  Of course one should never treat
the amount of fuel on board as being known more accurately than the gauges
can report.  Nevertheless, the procedure the crew actually followed must be
safer than just assuming that you still have enough fuel to get there
because you did when you left.

Jonathan Corbet says he would prefer it if the flight plan showed the
intended destination as the official one rather than the `virtual
destination.'  Then the crew reviews the relevant data mid-flight and
diverts if things don't look right.

In theory, of course, there's no difference between the two plans.  In each
case the crew takes off intending to land (for real) at the same place and
intending to divert to the same place if there is an anomaly in the fuel
consumption.  There is a difference, though, and that is in what happens if
for some reason a decision is not made or not communicated to the folks on
the ground.  In this particular case, for example, the decision to continue
to the intended destination is not entirely up to the flight crew: the
airline's dispatchers on the ground must agree as well.  I understand that
kind of thing is routine in airline operations and provides a way to ensure
that the decision was actually made and not just documented. :-)  In this
case, if for some reason the crew cannot reach the dispatcher on the radio,
then there is no decision and the flight must therefore continue to its
original `official' destination.

So the difference between the two strategies is that in the case I
described, the flight does the safe, inconvenient thing unless everything
can be proven to be working properly.  In the strategy Jonathan Corbet
describes, the flight does the convenient, potentially unsafe thing unless
something can be proven to be working improperly.  Those two strategies may
be the same in theory, but they sure differ in practice.
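
One schematic way to see the difference (purely illustrative; real dispatch
procedures are far richer) is in what each strategy does by default when no
decision can be made or communicated:

  # Hypothetical sketch of the two default behaviors.
  def plan_virtual_destination(dispatcher_agreed):
      # Official destination is the divert field; continuing to the
      # intended field requires an explicit, communicated go-ahead.
      return "continue" if dispatcher_agreed else "divert"

  def plan_intended_destination(anomaly_established):
      # Official destination is the intended field; diverting
      # requires positively establishing a problem en route.
      return "divert" if anomaly_established else "continue"

  # Radio failure: no go-ahead obtained, no anomaly established.
  print(plan_virtual_destination(False))    # divert   (fails safe)
  print(plan_intended_destination(False))   # continue (fails unsafe)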

--Andrew Koenig   [email protected]

PS: This follow-up is very late for an amusing reason: I mistakenly sent it
to [email protected] instead of [email protected].  Normally I would have
found out about that immediately.  In this case, however, cri.com picks up
its mail by polling its service provider, and has not been doing that very
often.  As a result, nothing happened for a week or so and then I got back
automatic mail saying that my message had not been delivered and it would
keep trying.

So I compounded my error by trying to send mail to [email protected],
which had the expected effect: no response for a week and then another
automatic message.

Finally I mentioned the problem to Steve Bellovin, who said he had no
trouble reaching RISKS.  I sent him a copy of the bounce message; not
having my particular set of blinders, he saw the problem immediately.

------------------------------

Date: 6 September 1995 (LAST-MODIFIED)
From: [email protected]
Subject: ABRIDGED info on RISKS (comp.risks)

The RISKS Forum is a moderated digest.  Its USENET equivalent is comp.risks.
SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent) on
your system, if possible and convenient for you.  BITNET folks may use a
LISTSERV (e.g., LISTSERV@UGA): SUBSCRIBE RISKS or UNSUBSCRIBE RISKS.  [...]
DIRECT REQUESTS to <[email protected]> (majordomo) with one-line,
  SUBSCRIBE (or UNSUBSCRIBE) [with net address if different from FROM:]
  INFO     [for further information]

CONTRIBUTIONS: to [email protected], with appropriate,  substantive Subject:
line, otherwise they may be ignored.  Must be relevant, sound, in good taste,
objective, cogent, coherent, concise, and nonrepetitious.  Diversity is
welcome, but not personal attacks.  [...]
ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY.
Relevant contributions may appear in the RISKS section of regular issues
of ACM SIGSOFT's SOFTWARE ENGINEERING NOTES, unless you state otherwise.

RISKS can also be read on the web at URL http://catless.ncl.ac.uk/Risks

RISKS ARCHIVES: "ftp ftp.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>
cd risks<CR> or cwd risks<CR>, depending on your particular FTP.  [...]
[Back issues are in the subdirectory corresponding to the volume number.]
  Individual issues can be accessed using a URL of the form
    http://catless.ncl.ac.uk/Risks/VL.IS.html      [i.e., VoLume, ISsue]
    ftp://unix.sri.com/risks  [if your browser accepts URLs.]

------------------------------

End of RISKS-FORUM Digest 17.56
************************