Subject: RISKS DIGEST 17.28
REPLY-TO: [email protected]

RISKS-LIST: Risks-Forum Digest  Monday 21 August 1995  Volume 17 : Issue 28

  FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

***** See last item for further information, disclaimers, etc.       *****

 Contents:
Russian Hackers (PGN and Christopher Klaus)
ATC glitches, continued (PGN)
Medicare leak through FOIA analysis and 9-digit ZIP (Quentin Fennessy)
Disabling technology? (Geoffrey S Knauth)
"Safeware: Systems Safety and Computers" by Leveson (Rob Slade, Nancy Leveson)
Re: Insisting on explanations (Julian Thomas)
Re: Intel Warns of Marred Motherboards (Dave Porter)
Re: Intel-Hacking Conviction (Steve Pacenka)
Re: Stale accounts and lifestreams (Paul E. Black)
Re: Netscape security (Harlan Rosenthal, Nevin Liber, Phil Koopman,
   Bernard Gunther)
Re: "The Net" and "555-xxx" IP numbers (Zygo Blaxell, Matthias Urlichs,
   Colin Plumb)
Info on RISKS (comp.risks), contributions, subscriptions, FTP, etc.

----------------------------------------------------------------------

Date: Mon, 21 Aug 95 7:51:22 PDT
From: "Peter G. Neumann" <[email protected]>
Subject: Russian Hackers

Court documents were unsealed on 18 Aug 1995 that implicated Russian
computer hackers in about 40 transfers totalling more than $10 million from
the Citibank electronic funds transfer system between June and October
1994.  The hackers were caught as they were trying to move $2.8M.  The bank
indicated that only $400,000 was actually transferred -- which at first
reading might seem to contradict the $10 million figure, until one notes
Citibank's statement that none of its clients lost any money and that all of
the transfers were either blocked or reversed.  Six people have been arrested.
24-year-old Vladimir Levin (who worked for AOSaturn, a Russian software
house, and who is currently under arrest in London) apparently had figured
out how to get around or through the Citibank security system.  [Source: An
Associated Press item in the San Francisco Chronicle, 19 Aug 1995, D1.]
Sounds like another case of reusable (fixed) passwords biting the dust?

  [Christopher Klaus <[email protected]> added the following info,
  based on a report of Voice of America correspondent Breck Ardery:
    The other five are two people in the U.S., two in the
    Netherlands, and one in Israel.  PGN]

------------------------------

Date: Mon, 21 Aug 95 7:59:32 PDT
From: "Peter G. Neumann" <[email protected]>
Subject: ATC glitches, continued

Radio communications between pilots and air-traffic controllers vanished for
one minute on 11 August 1995 (until the backup system could be engaged),
over a 200,000 square-mile area including all of Washington state and parts
of Oregon, California, Nevada, Montana, and Idaho.  This problem resulted
from a software glitch in a 2-month-old $1.4 billion computer system at the
regional center in Auburn, Washington, and was disclosed on 16 Aug.  [This
relatively minor outage follows the much more serious problems in Chicago
(RISKS-17.21), Fremont (RISKS-17.24), and Miami (RISKS-17.26).]
``The FAA says the new system, which replaces one dating from the 1950s, is
more reliable and flexible, safer, easier to repair and provides better
voice quality when controllers talk to pilots.''  [Source: San Francisco
Chronicle, 17 Aug, A6.]

------------------------------

Date: Sun, 20 Aug 1995 09:35:01 -0500
From: Quentin Fennessy <[email protected]>
Subject: Medicare leak through FOIA analysis and 9-digit ZIP

I read an article on Medicare in the 20 Aug 1995 _Austin American-Statesman_,
evidently done for the Cox newspaper chain.  It describes the deterioration
of the service, and also touches on the fact that a handful of doctors earn
a disproportionate share of the Medicare funds paid out.

The article has a sidebar, which says, in short: Cox analyzed 100 million
computerized Medicare payment records for the report.  The information was
obtained via FOIA.  The doctors' names were not released; evidently there is
an ongoing court case to release them.  Nonetheless, Cox was able to identify
some of the doctors: their ID codes were obscured by Medicare, but the
9-digit ZIP codes of their offices were not, and that level of detail let
Cox pinpoint individual doctors.

Risks: If information needs to be split into private and public components,
then care needs to be taken to do the job correctly.  9-digit ZIP codes
divide the US into fairly small areas and so can (and here did) give away
the store.

This is not to say that I think this Medicare information should be kept
secret.  However, 9-digit ZIP codes in databases can be used to pinpoint all
sorts of details about people.
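
A minimal sketch of the mechanics (the file and column names here are
hypothetical, not from the Cox analysis): a 9-digit ZIP frequently contains
exactly one medical office, so joining the "anonymized" records against any
public directory keyed on ZIP+4 recovers identities:

    import csv
    from collections import defaultdict

    # Public directory (e.g., phone listings): office ZIP+4 -> doctor names.
    doctors_by_zip9 = defaultdict(list)
    with open("doctor_directory.csv") as f:       # hypothetical file
        for row in csv.DictReader(f):
            doctors_by_zip9[row["zip9"]].append(row["name"])

    # "Anonymized" payment records: doctor ID obscured, ZIP+4 left intact.
    with open("medicare_payments.csv") as f:      # hypothetical file
        for row in csv.DictReader(f):
            candidates = doctors_by_zip9.get(row["zip9"], [])
            if len(candidates) == 1:
                # The quasi-identifier pins down exactly one doctor.
                print(row["zip9"], "->", candidates[0], row["amount"])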

Quentin Fennessy  [email protected]

------------------------------

Date: Mon, 21 Aug 1995 13:36:20 GMT
From: [email protected] (Geoffrey S Knauth)
Subject: Disabling technology?

A close relative does cancer research at a very large company.  She marveled
when I described the power of the Web, but said she could never use it:
employees are prohibited from doing most useful searches, because of fear
that competitors might see activity and learn what they are working on.  So
much for enabling technology.

Geoffrey S. Knauth <[email protected]>  http://www.marble.com/people/gsk.html
Marble Associates, Inc., (617) 487-0050  CRASH-B Sprints, Cambridge Boat Club

------------------------------

Date: Fri, 18 Aug 1995 16:43:51 EST
From: Rob Slade <[email protected]>
Subject: "Safeware: Systems Safety and Computers" by Leveson

BKSAFWAR.RVW   950531

"Safeware: Systems Safety and Computers", Leveson, 1995, 0-201-11972-2, U$49.43
%A   Nancy Leveson
%C   1 Jacob Way, Reading, MA   01867-9984
%D   1995
%G   0-201-11972-2
%I   Addison-Wesley Publishing Company
%O   U$49.43 416-447-5101 fax: 416-443-0948 [email protected] [email protected]
%O   800-822-6339 617-944-3700 Fax: (617) 944-7273
%P   680
%T   "Safeware: Systems Safety and Computers"

Leveson has produced a thorough and broadly-based overview of the
literature, structures, and models of system safety.  Throughout, there is a
strong awareness of human factors and the human-machine interface (HMI).
There is also mention of the social, political, and economic forces driving
or impeding safety considerations.  Several studies cited indicate the
counter-intuitive result that safety is not a cost of production but,
rather, actually improves performance.

Despite the title, and some sections specifically directed at software,
there is some difficulty in relating much of the material to the software
development process.  As Leveson herself points out, software design and
programming are generally unresponsive to engineering models of risk
assessment and quality control.  Those chapters with "software" in the title
show a marked decrease in citations.  (This undoubtedly represents the
actual state of the art, rather than any lack of research.)

(One micro-peeve is that although the bibliography has hundreds of entries,
the examples are limited to a few dozen, with a handful reiterated
throughout the book.  These detailed case studies are quite clear, but
additional incidents might have made the material both more interesting and
more convincing.  Again, the examples have relatively little to do with
software.)

This is a very realistic analysis of the current state of risk assessment
and management, and of the social activity in relation to it.  The book
shows a society surrounded by accidents waiting to happen.  There are,
however, some directions which indicate hope for a safer future.

copyright Robert M. Slade, 1995   BKSAFWAR.RVW   950531

Postscriptum: Leveson has seen the draft of this review, and was surprised
at my statements regarding the lack of specific application to the software
development process and the limited number of examples.  While the points
she raised in her response are very important, I still find that my
impression of the book is unchanged.  However, this is only my opinion,
based on subjective feelings.  I would like to repeat that the work itself
is of the highest quality, and useful to anyone concerned with developing
safe products.

[email protected], [email protected], Rob Slade at 1:153/733 [email protected]
Author "Robert Slade's Guide to Computer Viruses" 0-387-94311-0/3-540-94311-0

------------------------------

Date: Fri, 18 Aug 1995 17:46:12 PDT
From: Nancy Leveson <[email protected]>
Subject: Re: "Safeware: Systems Safety and Computers" by Leveson (Slade)

Rob, Thank you very much for your very nice review.  I do have questions,
though, about two parts that I don't really understand.

     Despite the title, and some sections specifically directed at software,
     there is some difficulty in relating much of the material to the
     software development process.

I'm a little surprised by this.  Part 4 (pages 225-500 of the main text) is
all about the software process except for one chapter on hazard analysis in
general, and there is a 40-page chapter on the software and system safety
process along with another chapter on management that includes process
information.  In addition, the first part of the chapters on software
requirements, software design, human-computer interface design, and software
verification of safety are devoted to the process related to that aspect of
development and how safety affects it.  Is there some part of the software
development process that I left out?  I purposely did not lay out one
process because one process does not fit every project.  Instead, I tried to
provide enough information that people could tailor the general process I
describe to fit their particular environment, project, personnel, and degree
of risk.

     (One micro-peeve is that although the bibliography has hundreds of
     entries, the examples are limited to a few dozen, with a handful
     reiterated throughout the book.  These detailed case studies are
     quite clear, but additional incidents might have made the material
     both more interesting and more convincing.  Again, the examples
     have relatively little to do with software.)

I also am unclear about what you meant here.  I have hundreds of software
examples throughout the book: I use all the known and documented software
examples about which anyone has any real information.  Peter Neumann's book
has a few more, but many of these are unrelated to safety (many are security
and privacy-related).  I have some that are not included in any of the other
recent books.

Because of my fear of doing harm (I waited 15 years to write this book
because I was worried that I might say something that might endanger
people), I did not include anything in the book for which I do not have
substantial evidence.  I read over 500 books and papers on safety and
engineering over the 7 years it took me to write this book, talked to
hundreds of engineers who build such systems, and worked myself on dozens of
safety-critical software projects in many different industries.  My fear of
doing harm is partially the reason for excluding rumored accidents that have
not been investigated or documented by someone who participated on the
project.  Chapter 3 includes a lot of evidence from the social science and
safety literature that shows that accidents are often blamed publicly on the
wrong factors for various psychological, sociological, and political
reasons.  There is agreement and carefully documented evidence on this
misattribution phenomenon.  And stories taken from newspapers are written by
reporters who are under time pressure to produce news immediately and not
wait six months for an investigation of what really happened.  Most of what
I have read in some recent books, bboards, and newspapers about accidents
that I am personally familiar with has been incorrect.  Misleading and wrong
information can be more dangerous than no information.

Perhaps the confusion arises from the appendices.  These were included only
because describing the better-known and researched accidents (about which we
have enough information to really learn something) in the main text would
have required people already familiar with these accidents, particularly
safety engineers, to have to wade through a lot of pages unnecessarily.
However, software engineers probably know little about them and need the
information to understand some of the text.  These well-researched accidents
are mentioned a lot in Chapters 3 and 4 on the general causes of
accidents---including organizational and managerial factors and human
error---because they are the only ones we have adequate information about
with respect to these particular factors.  The computer-related accidents
are spread throughout the book and not included in the appendices (aside
from the Therac-25) because everything we know about them can be described
in a paragraph or two.

A reason for using non-computer related accidents is that they provide
important information about what needs to be done when computers are
substituted for analog controllers.  Virtually all system safety engineers
agree that software-related accidents are no different from those in
which computers are not used---software engineers are causing the same
accidents that other engineers learned how to avoid years ago.  They (and I)
feel that unless the software engineers learn those basic safety concepts
that have been accumulated over decades by engineers, we are going to repeat
the accidents of the past and kill thousands of people unnecessarily.

Nancy

------------------------------

Date: Sat, 19 Aug 95 07:13 EST
From: Julian Thomas <[email protected]>
Subject: Re: Insisting on explanations (Green, RISKS-17.24, etc.)

The experience with Dell brings to mind my own experience a number of years
back with a bank, when I did in fact receive a coherent explanation for a
strange entry on an annual mortgage statement.  I suspect that their DP
manager found it very difficult to write the letter, which identified the
cause of the strange numeric entry as the account number having been
specified in a formula in place of one of the dollar quantities!

The risk: long-winded COBOL variable names, I suspect.
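
For anyone who has not met this bug class, here is a minimal sketch (in
Python rather than COBOL, with invented names and amounts) of the kind of
slip described:

    account_number = 80123456       # an identifier, not money
    escrow_payment = 412.50         # the dollar amount actually intended

    old_balance = 98000.00
    # Intended: new_balance = old_balance - escrow_payment
    new_balance = old_balance - account_number   # wrong operand picked
    print(new_balance)   # a meaningless figure lands on the statement

One similarly named variable substituted for another, and an account number
participates in arithmetic just as happily as a dollar amount.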

------------------------------

Date: Mon, 21 Aug 95 12:10:46 EDT
From: dave porter <[email protected]>
Subject: Intel Warns of Marred Motherboards (Edupage, RISKS-17.27)

Since I own one of these motherboards, I'm interested in this bug.  I'm
still trying to find out the exact details of the failure mode.

However, the description quoted in RISKS-17.27 doesn't quite reflect what's
really going on.  The workaround was a BIOS upgrade (which had the effect of
disabling some performance-enhancing feature in the RZ1000; it therefore
doesn't meet my criteria for "correcting" the flaw, but perhaps I'm just
fussy).

Naturally, changing the disk BIOS has no effect on operating systems that
don't use the BIOS for disk I/O.  I think this is what they mean by OS/2
"disabling" the patch, but their choice of words somehow seems to be blaming
OS/2 rather than the maker of the buggy board.

 [You mean, this is putting the buggy before the buggyman?
 Dave thought there might be a pun on "buggy whips", but
 "bogey MIPS" might be more like it.  PGN]

The RISK?  Oh, I suppose the risk is something to do with expecting to get
detailed, accurate, technical information about computer products that are
marketed to the technically naive.

dave

------------------------------

Date: Sat, 19 Aug 1995 17:00:54 GMT
From: [email protected] (Steve Pacenka)
Subject: Re: Intel-Hacking Conviction (Kabay, RISKS-17.23)

M.E. Kabay cited a lesson of the Randal "Perl" Schwartz conviction:

 Comments from MK: Another story confirming the old principle that you do
 NOT attempt to improve security by busting it without getting _written_
 authorization from an appropriate officer of the organization.  This is
 known as the CYA principle.

That has been a starting point for debate on Usenet, extrapolating from this
case.  Two questions have been most interesting to me, as a person with
increasing LAN and Internet-access administration responsibility.

* Under what circumstances should computer administrators go beyond their
authorized boundaries before consulting?  One participant cited an "act then
inform" example.  A glaring rlogin security hole that made his employer's
machines all dangerously vulnerable was being widely publicized online on a
weekend.  Going well beyond his explicit boundaries, he used the hole to
disable remote access to every machine he could reach, then informed others.
Is this felony unauthorized alteration and misdemeanor unauthorized access?
Does it risk too much accidental damage to tolerate?  Or is it a "clear and
present danger" that a professional is obligated to react to immediately?
What if the consultation happens first but the person consulted is
insufficiently concerned?

* Which approach is preferable for corporate computer security: having
rigid, rigorous procedures or having inquisitive (but skilled) personnel
roam freely in search of the unexpected?  One possible view of Schwartz's
actions and statements is that he was doing the latter as an inferred part
of his duties, but afterward being held up to the standards of the former.
The debate has involved fans of both approaches; surely the organization and
worker must agree on when to use which.

Some asked a speculative question specific to this case, without seeing the
court testimony: is a computer crime law being misapplied to punish
transgressions within a company, when the law's primary intent must have
been to deter and punish thieves, spies, and damagers?  If there is an
element of this present, is there now a greater personal risk of exposing
computer security risks in one's company?  If so, too many professionals
could infer that the lesson from this case is MYOB rather than CYA.

-- Steve Pacenka  NY State Water Resources Institute @ Cornell University
  [email protected]  ## Email to [email protected] for info about the
  Randal Schwartz computer security case.

------------------------------

Date: Fri, 18 Aug 1995 13:31:40 -0600
From: Paul E. Black <[email protected]>
Subject: Re: Stale accounts and lifestreams (Ewing, RISKS-17.27)

In RISKS-17.27, Martin Ewing ([email protected]) writes:

 ... Dave Gelernter at Yale has developed a "lifestream" database model
 which would capture and organize all your electronic data, starting
 with your birth certificate.  ...

There is no clear distinction between relevant data (to be included) and
irrelevant data (to be excluded).  Instead there is a gradation from vital
information to utterly useless junk.  If one has a rare medical problem, it
may be very important to know the medical histories of parents and
relatives.  Thus any drugs my mother took are an important part of who I am,
so my "lifestream" begins even before my birth certificate.  If my house is
discovered to be in a geologically unstable area, repair records before I
bought the house may be important in deciding whether or not to take
preventive measures.  The history of those that did the repairs may be
important, too.  But I doubt I'll ever need to know who the lead drummer for
the Eagles was.

Attempting to have all of one's data in one place will not be a
breakthrough: either everything will be included, in which case storage is
problematic and finding pertinent information will be hard, or some items
will be left out, in which case they must be tracked down in outside
repositories.

Paul E. Black  Laboratory for Applied Logic, Brigham Young University
Provo, Utah   84602  [email protected]  +1 801 378 8113  ([email protected])

------------------------------

Date: Mon, 21 Aug 95 14:02:54 -0400
From: "Rosenthal, Harlan" <[email protected]>
Subject: Re: Netscape security (Shank, RISKS-17.27)

One message emphasizes the break; another emphasizes how much work went into
it.  While not expecting absolute security, I feel that the second
underestimates the value of even a single credit-card number, and
overestimates the difficulty.  Optimize the cracking process, build
bit-slice hardware dedicated to the purpose, and the cost (and time) will
come down; flood out a few hundred orders on a stolen credit card, and if
even half of them get delivered you win.  Not to mention the inconvenience
to the true cardholder.

Just picture the fun of having some random hacker break your message (even
if he has to leave his computer on for a few weeks), order a few thousand
dollars worth of stuff on your credit card, and POST YOUR NUMBER on the
local BBSs.  You'll be cleaning up the paperwork for the next year.

No, thanks.  I get nervous enough giving my number to these people anyway,
having once had a dishonest employee at a mail-order house use my number
[among many others] improperly.  The last thing I want to do is ship it over
a medium that passes through an unknown number of other people's systems on
the way.

-Harlan Rosenthal

------------------------------

Date: Fri, 18 Aug 1995 14:16:25 -0700
From: [email protected] (Nevin ":-]" Liber)
Subject: Re: Netscape security (Shank, RISKS-17.27)

This type of cost analysis is only valid *if* the user of the computing
power has to make a tradeoff between using it for this purpose and other
useful work.  If these machines would otherwise be idle, this computing
power is virtually free (imagine if everyone ran RC4-40 cracking software
instead of screen savers...).  Also, how much cheaper does the computing
power get if you allow, say, 30 days to crack a message?  How much cheaper
will the computing power be next year, or the year after that (assuming the
data still retains its value; more on this below)?

How valuable are credit card numbers?  A reasonable assumption could be
the credit limit on the card.  My credit limit per card is certainly well
within the ballpark of the $10K cost you associate with cracking a
message, and I would guess that most non-students who have the equipment
to surf the Internet have a similar amount of credit available.

The other aspect to determining the level of security needed is the
duration that the information retains its value.  My primary credit card
has had the same number for the last five years, and I don't see it
changing in the foreseeable future, barring someone else "stealing" it.
This, combined with credit limits usually going up over time, makes this
data valuable *indefinitely*.

>   3. Inside the US, software can support a range of stronger encryption
>      options, including RC4-128, which is 2^88 times harder to break.

Irrelevant.  How many sites on the Internet are going to want to deal with
US-only transactions?

The other question to ask is who exactly is assuming the risk:  Netscape,
Visa, or consumers directly?

 Nevin ":-)" Liber       [email protected]    (520) 293-2799

------------------------------

Date: Fri, 18 Aug 1995 15:56:08 -0400 (EDT)
From: [email protected] (Phil Koopman x1624)
Subject: Re: Netscape security (Shank, RISKS 17.27)

In "Netscape Security" (RISKS 17.27) Peter Shank argues that RC4-40
is currently adequate for $10,000 dollars worth of security, because
that is approximately how much it costs to crack the encryption.

Unfortunately for that argument, compute power tends to get cheaper over
time.  Using an arm-waving rule of thumb of 2x more MIPS per dollar per
year, RC4-40 might be "trusted" for protection over time as follows:

 1995 $10,000
 1996  $5,000
 1997  $2,500
 1998  $1,250
 1999    $625

$5,000 credit limits are not uncommon.  So it seems that very soon
RC4-40 won't be secure enough to really trust with credit card numbers.
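
In code, that rule of thumb (the 2x-per-year figure is an arm-waving
assumption, not a measured constant):

    def crack_cost(year, base_year=1995, base_cost=10000):
        # Estimated cost to brute-force one RC4-40 message, assuming
        # MIPS per dollar doubles every year.
        return base_cost // 2 ** (year - base_year)

    for year in range(1995, 2000):
        print(year, crack_cost(year))   # 10000, 5000, 2500, 1250, 625

which reproduces the table above.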

The RISK to me seems to be neglecting the relentless exponential growth in
compute power when doing encryption/security analyses.  I imagine the
costs of picking an inappropriate standard method and then changing it could
be significant.

Phil Koopman  United Technologies Research Center (UTRC)  411 Silver Lane
East Hartford, CT  06108   USA   [email protected]   (203) 727-1624

------------------------------

Date: 19 Aug 95 11:27:12 EDT
From: Bernard Gunther <[email protected]>
Subject: Re: Netscape security

  [Many TNX to all of you who submitted comments on the Netscape situation
  and its deeper issues.  Phil Koopman's point was noted by more
  respondents than I care to enumerate.  Bernard Gunther's message
  added another view as well.  PGN]

I have a friend whose purse was stolen and in two months about $25,000 worth
of fraud was committed using just the checkbook, credit cards, and IDs.  If
a single decryption is a break-even proposition today, tomorrow it will be
cheap.  Clearly, RC4-40 is only temporarily good enough.  RC4-128 sounds
like a better bet.

Bernard Gunther

------------------------------

Date: Wed, 16 Aug 1995 00:17:41 -0400
From: [email protected] (Zygo Blaxell)
Subject: Re: "The Net" and "555-xxx" IP numbers (Bernstein, RISKS-17.26)

>> (and the IP equivalent of a 555-xxxx number is xx.xxx.345.xxx).
>Yeah. Unfortunately, typical IP software will silently convert 345 into
>89, which is a valid number. A better solution would be to allocate a
>set of IP addresses for use in movies. How about 43.43.xxx.xxx?

No...apart from 43 being a class A network (which means it's 43.xxx.xxx.xxx,
not 43.43.xxx.xxx), it already belongs to an NSP in Japan.  There is already
an IANA-sponsored test network at 192.0.2.xxx, which will work just fine as
long as the movie requires only one class C subnet.  ;-)
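
The silent conversion is just modular arithmetic: an IPv4 octet holds eight
bits, so a parser that masks instead of rejecting keeps only the value mod
256.  A two-line illustration:

    print(345 % 256)    # 89 -- why xx.xxx.345.xxx is not safely fictitious
    print(345 & 0xFF)   # 89 -- the same wraparound, as a parser's bit mask

Hence the appeal of 192.0.2.xxx: those addresses are reserved for test use
and should never route to a real host.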

Zygo Blaxell, former sysadmin and current guru for the Univ. of Waterloo
Computer Science Club; current sysadmin for miranda.uwaterloo.ca, ezmail.com.

------------------------------

Date: 20 Aug 1995 15:24:10 +0200
From: [email protected] (Matthias Urlichs)
Subject: Re: "The Net" and "555-xxx" IP numbers (Bernstein, RISKS-17.26)

How about 10.anything?  That's reserved for private internets; see RFC 1597.

Matthias Urlichs  Schleiermacherstrasse 12, 90491 Nuernberg (Germany) 42
[email protected]

------------------------------

Date: 20 Aug 1995 00:55:30 -0600
From: [email protected] (Colin Plumb)
Subject: Re: "The Net" and "555-xxx" IP numbers (Bernstein, RISKS-17.26)

How about 127.xxx.xxx.xxx?  You'd have a hard time hurting another system
using that.  Or, if you want to make it more obviously invalid, use 383, 639
or 895.x.x.x .  Of course, to be really futuristic, we need the IPv6
equivalent...  Colin

------------------------------

Date: 11 August 1995 (LAST-MODIFIED)
From: [email protected]
Subject: ABRIDGED Info on RISKS (comp.risks) [See other issues for full info]

The RISKS Forum is a moderated digest.  Its USENET equivalent is comp.risks.
SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent) on
your system, if possible and convenient for you.  BITNET folks may use their
LISTSERV (e.g., LISTSERV@UGA): SUBSCRIBE RISKS or UNSUBSCRIBE RISKS.  [...]
DIRECT REQUESTS to <[email protected]> (majordomo) with one-line
  SUBSCRIBE (or UNSUBSCRIBE) [with net address if different from FROM:]

CONTRIBUTIONS: to [email protected], with appropriate,  substantive Subject:
line, otherwise they may be ignored.  Must be relevant, sound, in good taste,
objective, cogent, coherent, concise, and nonrepetitious.  Diversity is
welcome, but not personal attacks.  [...]
ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY.
Relevant contributions may appear in the RISKS section of regular issues
of ACM SIGSOFT's SOFTWARE ENGINEERING NOTES, unless you state otherwise.

RISKS can also be read on the web at URL http://catless.ncl.ac.uk/Risks
  Individual issues can be accessed using a URL of the form
  http://catless.ncl.ac.uk/Risks/VL.IS.html  [...]

RISKS ARCHIVES: "ftp unix.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>
cd risks<CR> or cwd risks<CR>, depending on your particular FTP.  [...]
[Back issues are in the subdirectory corresponding to the volume number.]

------------------------------

End of RISKS-FORUM Digest 17.28
************************