Subject: RISKS DIGEST 17.68

RISKS-LIST: Risks-Forum Digest  Tuesday 30 January 1996  Volume 17 : Issue 68

  FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

***** See last item for further information, disclaimers, etc.       *****

 Contents:
Extremely undesirable errors (Jordin Kare)
20 bits, 40 bits, 60 bits, $10 (A. Padgett Peterson)
Re: Cost to crack Netscape Security falls... (Larry Kilgallen,
 Peter Kaiser, Peter Curran)
Simulation vs. Live Action (Japanese plane incident) (John Bredehoft)
Re: Unintended missile launches (Thomas L Martin, Robin Kenny)
Re: Security hole in SSH 1.2.0 (Bear R Giles, Mike Alexander)
Re: WebCard Visa: It's everywhere you (don't) want to be? (Li Gong)
Computer Control and Human Error (Kletz+Chung+Broomfield+Shen-Orr)(C. Shen-Orr)
Re: "Learning Machines" and "Learning People" (Michael J Zehr)
Legacy security holes (Ted Russ)
ABRIDGED info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Fri, 26 Jan 96 14:27:06 PST
From: Jordin Kare <[email protected]>
Subject: Extremely undesirable errors

The recent discussion of modems hanging up on "+++" made me think of the
following:

 Q:  What is the worst error message the pilot of an F-14 flying over
     the Persian Gulf can receive?

 A:  NO CARRIER

Jordin Kare

   [Cute.  I would also hate to see a sequence of messages on a fly-by-wire
   cockpit screen something like this:
     "Garbage collection.
      Undefined error at location 75421.
      Dumped core.
      Rebooting, please wait.
      Flight-data loss likely."  PGN]

------------------------------

Date: Thu, 25 Jan 96 22:22:43 -0500
From: [email protected] (A. Padgett Peterson)
Subject: 20 bits, 40 bits, 60 bits, $10

Subject: Single computer breaks 40-bit RC4 in under 8 days (Kocher, 17.67)

>On a typical PC, there are just too many other security weaknesses
>for there to be much practical difference between 3DES and 40-bit RC4.

Paul and I have different worldviews, and this is one example.  "Other
security weaknesses" usually revolve around theft, compromise, and "buying an
employee" - the last is often the cheapest (social engineering) but is
traceable.  The nice thing about brute force is that it is essentially
passive: the target never needs to have a clue that it happened.

Given that, I apply a quanta rule to crypto: these days I use DES as
"strong enough", since the state of the art for cracking 56 bits is measured
in years, not hours.

OTOH, with the right equipment, RC4-40 will usually succumb in either
3 hours or $500, whichever comes first - down in the noise as far as cost
goes, and wholly passive ("other" methods are active, or at least local).  So
clearly 40 bits is not "Good Enough"(C), while at 56 bits other methods of
attack are easier.  The real economic cross-over is somewhere in the middle,
probably around a week at 46 bits/$30,000, but with 56 bits easy and
available, why bother with less (it would cost *more*)?
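
To make the doubling arithmetic concrete, here is a minimal sketch (mine,
not Padgett's) that extrapolates from his RC4-40 baseline of roughly $500
or 3 hours, simply doubling once per additional key bit; it ignores
hardware parallelism and falling prices:

  /* Back-of-the-envelope sketch: double the 40-bit RC4 baseline
   * (~$500 or ~3 hours, figures from the posting above) once per
   * additional key bit.  Everything else is an assumption. */
  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      const double base_cost = 500.0;   /* dollars at 40 bits */
      const double base_hours = 3.0;    /* hours at 40 bits   */
      const int keybits[] = {40, 46, 56, 64};
      size_t i;

      for (i = 0; i < sizeof keybits / sizeof keybits[0]; i++) {
          double factor = pow(2.0, keybits[i] - 40);
          printf("%2d bits: ~$%.0f or ~%.0f hours\n",
                 keybits[i], base_cost * factor, base_hours * factor);
      }
      return 0;
  }

At 46 bits this yields $32,000 and 192 hours (about a week), in line with
the crossover estimate above; at 56 bits it reaches roughly 22 years,
matching the "years, not hours" remark.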

So while it is possible that, in theory, there is little difference between
3DES and RC4-40, in the real world of cost/reward I suspect that there
is a gap that would take many doubloons to cross.

Padgett

p.s. Personally, I like 64 bits.  It's a chunk that fits nicely into 32-bit
registers.  Examined from a CISC cycle-cost perspective, why bother with less?

------------------------------

Date: Fri, 26 Jan 1996 10:37:13 -0500 (EST)
From: "Larry Kilgallen, LJK Software" <[email protected]>
Subject: Re: Cost to crack Netscape Security falls... (Curran, RISKS-17.67)

       There is a simple solution to the ITAR problem - develop the
       software in a location not subject to US export laws (i.e. almost
       anywhere else in the world).

While that may be a viable approach for companies in those other countries,
it is not relevant to Netscape.  As a US corporation, Netscape would most
likely be prosecuted under ITAR if they had such software developed by a
subsidiary or a contractor in some other country, even if the results never
touched US territory.  This government attitude even extends to not allowing
US companies to add convenient hooks for the provision of crypto once a
product is delivered overseas.

While the USA has no monopoly on software development expertise, it is _all_
of the software which must be developed elsewhere, not just the crypto part.
Since Netscape has such a hold on the browser market, it would appear that
the US _does_ have a monopoly on "something" (marketing?) that yields such
market penetration despite ITAR limitations; or perhaps the old adage is
true: "Security does not sell products".

Larry Kilgallen

------------------------------

Date: Fri, 26 Jan 96 08:20:11 MET
From: Peter Kaiser <[email protected]>
Subject: Re: Cost to crack Netscape Security falls... (Curran, RISKS-17.67)

Not so simple.  It must be developed not only outside the USA, but by a
non-US company and in a locale that permits unrestricted export of the
product.

Under those conditions it's more difficult for that company's profits to
accrue to Netscape (or whomever), because the US authorities pay attention
to whether there's a real separation of entities.  For instance, my company
is incorporated separately in most countries, but because we have a US
parent we are everywhere considered a US corporation for these purposes;
and in my experience, the authorities do pay attention to this.  So there's
nowhere on earth that we can develop crypto for unrestricted export.

See Bert-Jaap Koops's "Crypto Law Survey".

Pete [email protected]  +33 92.95.62.97  FAX +33 92.95.50.50

------------------------------

Date: Fri, 26 Jan 1996 11:55:14 -0500
From: Peter Curran <[email protected]>
Subject: Re: Cost to crack Netscape Security falls... (Kaiser, RISKS-17.68)

I am certainly not an expert on this topic, but...

This view is, of course, based on the American view of the situation.  Most
other countries find this attempt at extra-territorial application of US law
to be extremely offensive, and illegal.  A company incorporated in any
country is subject to the laws of that country, not US law, regardless of
the ownership.  US laws claiming otherwise are generally regarded as null
and void outside the US.  In Canada, for example, I believe we have a law
making it explicitly illegal for subsidiaries in Canada to obey US law when
it conflicts with Canadian law.  This normally arises in relation to the US
embargo of Cuba, but it applies in other areas.

Of course, in reality this can simply create an unresolvable conflict for
the companies - subject to one penalty for acting, and another for not.
However, in regard to the ITAR rules, I think it is pretty clear that a
company could contract out the development of a piece of software to an
offshore company, and the result would not be subject to US laws.  Obviously
there are complications involved in doing this, but if a company is
seriously trying to maintain a global leadership role in data communications
services, and seriously trying to provide acceptable security to its
customers, something along this line is necessary.

Peter Curran  [email protected]

------------------------------

Date: Fri, 26 Jan 96 15:26:26 EST
From: Thomas L Martin <[email protected]>
Subject: Re: Unintended missile launches (Shafer, RISKS-17.67)

This sounds like an incident a high school teacher of mine talked about
several times.  During Vietnam, on the aircraft carrier U.S.S. Forrestal,
someone forgot to raise the heat shields behind a departing jet.  When the
jet fired up its engines to take off, a missile locked onto the heat and
launched.  (I don't remember whether this missile was on another jet or
still being loaded.) The initial explosion led to several other missiles
going off and a huge fire.  My high school teacher worked in the psych ward
of a Navy hospital on Guam, and treated a number of sailors involved.

Sorry I'm so sketchy--it's been a number of years since I heard the story.

Tom Martin, Carnegie Mellon University, 5000 Forbes Ave., Hamburg Hall/EDRC,
Pittsburgh, PA 15213    (412) 268 6480   [email protected]

------------------------------

Date: Tue, 30 Jan 96 17:30:06 EDT
From: Robin Kenny <[email protected]>
Subject: Re: Unintended missile launches

In RISKS-17.67, Mary Shafer commented on unintentional missile launches
being common.  An ex-RAAF technician once related to me how he was atop a
tower at an airfield when two Sidewinder missiles were loosed and shot past
either side of it... he believes the tower's being made of wood saved him.
I wish I could have seen this, and heard him afterwards, as he was quite an
articulate gentleman.

[email protected]  Melbourne, Australia UTC +10 hours

------------------------------

Date: Wed, 24 Jan 1996 13:48:38 +0800
From: [email protected] (John Bredehoft)
Subject: Simulation vs. Live Action (Japanese plane incident)(Ishikawa, 17.65)

In RISKS-17.65, Chiaki Ishikawa described an incident in which a Japanese
air force fighter plane, armed with live missiles, shot down another plane
during a training exercise. According to the information provided, a message
appeared on the firearm status screen which should only appear when the
master firearm switch is activated. Some officials wondered why the pilot
didn't realize that something was wrong when this particular message
appeared.

This portion of the story raises an interesting question: if the plane had
been functioning "correctly," then a training exercise (in which the
"missiles activated" message would NOT appear) would differ from a live
combat situation (in which this message WOULD appear).

I was wondering if this difference between training and live action would be
a risk in itself (how would a pilot in combat react to a message that had
never appeared during training?).

Or would the "fix" (i.e., displaying a "missiles activated" message when no
missiles are actually activated) be riskier?

I'm sure that RISKS readers have faced simulation vs. actual issues before;
I'd be interested in your thoughts.

John E. Bredehoft, Proposals Department, Printrak International Inc.
[email protected]  [email protected]

------------------------------

Date: Fri, 26 Jan 1996 19:00:26 -0700 (MST)
From: [email protected] (Bear R Giles)
Subject: Re: Security hole in SSH 1.2.0 (Alexander, RISKS-17.67)

>... Unix security bugs that result from the fact that Unix associates all
>access controls with users and has no way to assign access rights to a
>program independent of the user running the program.

That's not true at all.  Unix can do this; it just requires a bit of work.

1. Create an /etc/passwd entry for the program.  You can set the
  password and shell fields so that logins are impossible.

2. Change the ownership of the program to this ID, and set the program's
  "set uid" bit.

I know this works because I have a slew of realtime ingest programs
which all run as a non-existent user.  The human users get access to
the data through a shared "group" id... but we have only read access
to most of that data.

It becomes more complex if your program requires root permissions, but I
know that NCSA httpd, for example, is setuid to root yet generally runs as a
specified user-id -- and it's highly recommended that this user-id be for a
nonexistent user with no privileges.  For many systems, that's nobody (-1)
and nogroup (-1).  It uses its special privileges only to grab a privileged
socket number.

>and access to files and other system resources can be granted to the
>program (or a combination of a program and a user)

You could fake the latter in Unix, with some work.  You would need a
"group" for each (user, program) pair, and a small program with root
permissions.  After determining the user-id of the person who invoked the
program, it could change the user and group IDs to "nobody" and
"user-program" (in a manner that permanently gives up root permissions!)
and then exec() the actual program, as sketched below.
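
A minimal sketch of such a wrapper (mine, not Bear's; the pseudo-user,
group name, and program path are hypothetical):

  /* Setuid-root wrapper: identify the invoking user, permanently
   * drop root, then exec() the actual program. */
  #include <stdio.h>
  #include <unistd.h>
  #include <pwd.h>
  #include <grp.h>

  int main(int argc, char *argv[])
  {
      struct passwd *caller = getpwuid(getuid()); /* real invoking user */
      struct passwd *nobody = getpwnam("nobody");
      struct group  *pair   = getgrnam("user-program"); /* per-pair group */

      (void)argc;
      if (caller == NULL || nobody == NULL || pair == NULL) {
          fprintf(stderr, "wrapper: missing user or group entry\n");
          return 1;
      }

      /* Drop the group first, then the uid; once setuid() succeeds
       * the process can never regain root. */
      if (setgid(pair->gr_gid) != 0 || setuid(nobody->pw_uid) != 0) {
          perror("wrapper: dropping privileges");
          return 1;
      }

      execv("/usr/local/libexec/actual-program", argv);
      perror("wrapper: execv");  /* reached only if exec() fails */
      return 1;
  }

A real wrapper would select the group from the caller's identity; the
lookup is shown only to make that step explicit.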

Does Unix require too much work to achieve this goal?  That's a hard
question to answer; allowing a more direct way to achieve these objectives
also increases the risk of undetected errors in the privilege logic.
There's a lot to be said for handling one case well instead of spreading
your effort over three or four different approaches.

Bear Giles  [email protected]

------------------------------

Date: Fri, 26 Jan 1996 12:24:13 -0800 (PST)
From: Li Gong <[email protected]>
Subject: Re: WebCard Visa: It's everywhere you (don't) want to be?

In RISKS-17.67, Doug Claar <[email protected]> mentioned the
"WebCard Visa", which will allow users to access their account statements
via the Internet.  Wells Fargo Bank, another major bank in the SF Bay Area,
has started offering online access to account information, not through the
old dial-up, but via the Internet.  From the marketing brochure, it is
unclear whether the bank would or could keep my records separate from those
of customers who have signed up to use the service.  If not, my records
would be exposed to the net whether I want it or not, and would be
vulnerable to holes/bugs in web software.

Li GONG, SRI International, URL http://www.csl.sri.com/~gong/

------------------------------

Date: Sat, 27 Jan 1996 00:32:48 -0500
From: [email protected] (Mike Alexander)
Subject: Re: Security hole in SSH 1.2.0 (Giles, RISKS-17.68)

Sure that works, but those users are non-existent only in your mind, not in
Unix's view.  If there is a bug in one of the programs that uses this
technique that lets a person get control of the process while it is running,
then that person is using that userID whether or not you think the user
exists.  Of course the person will only have whatever access the supposedly
non-existent user has, which is better than root access, but still....  The
advantage of program keys in this context is that if the program crashes,
the user doesn't inherit the access the program had, only whatever access
they are supposed to have given the userID they are using (and whatever
other program they happen to run).  This is quite different.

It's certainly possible to write secure software in Unix, but it's not
trivial.  Too much depends on each program (sendmail, ftpd, popd, etc.,
etc.) being well written and well debugged.  Putting this facility in the
kernel and doing it well makes it much, much easier to write secure server
programs.  People with only limited programming ability have written
server-style programs for MTS which have been at least as secure as the
typical Unix server -- even in Fortran although I wouldn't recommend that.
:-)

Please don't take me wrong and start yet another operating system war.
Unix is fine, but it doesn't necessarily have the only answer to all the
world's problems.  The same could be said for many operating systems,
certainly for MTS.  I'm just trying to point out that there is more than
one way to solve this problem, and looking at other ways may teach us
something.

Mike Alexander 1309 Gardner, Ann Arbor, MI, USA University of Michigan
+1-313-663-1759  [email protected]  America Online: MAlexander

------------------------------

Date: Sun, 28 Jan 1996 00:27:55 +0200 (EET)
From: "C. Shen-Orr" <[email protected]>
Subject: Computer Control and Human Error (Kletz +Chung+Broomfield+Shen-Orr)

 Computer Control and Human Error, by Trevor Kletz, with Paul Chung, Eamon
 Broomfield and Chaim Shen-Orr.  Institution of Chemical Engineers (IChemE)
 (UK) - ISBN 0 85295 362 3, Gulf Publishing (US) - ISBN 0 88415 269 3.
 July 1995.

Back-cover blurb: Whether you design computer-controlled systems, work with
them or merely write the software for them, this is the book for you.  It
presents computers and error in everyday language, making them accessible to
non-specialists and specialists alike.  Trevor Kletz takes you through a
range of incidents and accidents which occurred on computer-controlled
plants and equipment.  He is a master at bringing out the lessons.
Researchers Paul Chung and Eamon Broomfield survey experience to date with
hazard and operability studies (Hazops) on computers - an attempt to spot
and correct errors before they threaten the safety of the controlled
systems.  And Chaim Shen-Orr, who directs safety programmes for RAFAEL in
Israel, explains why computer-controlled systems can fail.  He adds advice
for system designers and managers designing or maintaining safety-sensitive
installations.  Human error is generally to blame - either in interaction
with the computer or in writing the software which it uses.  A compelling
read!

C. Shen-Orr, Sc.D., 17 Lionel Watson St., Haifa, Israel 34751
Tel: +972 (0)4 834-4831  Fax: +972 (0)4 834-4831   [email protected]

------------------------------

Date: Mon, 29 Jan 96 17:29:13 -0500
From: [email protected]
Subject: Re: "Learning Machines" and "Learning People"

.. or "When is a spam a real risk?"

When I first read the message, I wondered why it was included in RISKS.
Then I thought about the kind of risk it would be if someone actually
*did* develop a system as described.

Suppose the device works, and it can program your mind to say "Me llamo
Miguel" when you're introducing yourself to someone speaking Spanish.
But what if there were a transmission error while you were connected to
the machine and it actually programmed your brain to stop your heart
when you sneezed!

I'd be more leery of using such a device if I thought it worked than if
I was sure it didn't. :)

-michael j zehr

 [That is sort of why it got through my sieve in the first place.  TNX.  PGN]

------------------------------

Date: Tue, 30 Jan 1996 00:20:45 -0800
From: Ted Russ <[email protected]>
Subject: Legacy security holes.

Here is something which will make some people shudder, and others break out
in a cold sweat - a transcription (with all identifiable references changed
to protect the parties involved) of a memo which I have been given:

(A brief note first: This company has just recently consolidated all
their various systems, and incorporated a WAN connection to an affiliated
company which is interstate, and whose WAN is connected to the Internet via
a firewall.)

  [This is a very old problem.  However, because of the feeding frenzy
  of everyone getting on the Internet, it seems worthy of inclusion.  PGN]

 --------------------------------------------------------
Brief Description:
On Monday the 29th, I discovered a security hole in the new Computer
Services system, made the sysadm aware of it, and then helped him plug it.
The potential for almost unlimited access to the system was there, and as
shall be seen, there is probably no way to find out whether someone has
already taken advantage of the insecurity.

Details:
I used a quite legitimate program which was installed on my PC by
Computer Services as part of the new networking packages.  It allowed me,
under the sysadm's supervision, to copy a file to my PC from the IBM
machine.

The program is called FTP (File Transfer Protocol), and is commonly used
on the Internet to move files around any TCP/IP network, such as the one we
are running.  Most people who have used the Internet would be familiar with
its operation.

FTP requires a server program, called the FTP daemon, to run on the host
machine.  Such an FTP daemon was running on the IBM, and it was by
de-activating this daemon that the sysadm and I subsequently closed the
security breach.

FTP daemons can be set to log all transactions, and to restrict login
access; however, since there had never been a requirement for such measures
before, the daemon on the IBM machine had not been set to logging mode, nor
restricted in any way.
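
For illustration only (assuming a BSD-style ftpd started from inetd, which
may well differ from the IBM system in this memo), session logging is
typically enabled by adding the -l flag to the server's line in
/etc/inetd.conf:

  ftp  stream  tcp  nowait  root  /usr/libexec/ftpd  ftpd -l

After a kill -HUP of inetd, each FTP session is then recorded via syslog.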

Implications:
Since securing the IBM against network programs had never been a concern,
it was possible to read almost every file on the IBM system, and probably
write to or overwrite about half of them.

Since FTP is a well-known program, and since it has been installed on
every PC which is connected to our network, around 20% of personnel on
the premises could have exploited this weakness to copy sensitive data to
their PC, and then read it or copy it to a floppy.

Since our TCP/IP network is also connected to XXXX's WAN, anyone on that
system could also have exploited this weakness in a similar fashion.
Copies of sensitive material could have ended up in the wrong hands,
details could have been changed and then written back onto the IBM, etc.

A username and password are needed to log onto an FTP server successfully;
however, there are several commonly used account names, such as "sysadm",
with which most users (and certainly ALL would-be system breakers) are
familiar, so that only the password remains to be found out in order to
gain access.

One of the first targets for system breakers then becomes the password
file, which, once they have copied it, they can crack at their leisure and
thus obtain the root password.
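
To show why a readable password file is so dangerous, here is a minimal
sketch (not from the memo; the hash and word list are hypothetical) of the
offline dictionary attack such a copy invites, against classic crypt(3)
entries:

  /* Try each word in a small dictionary against one crypt(3)-style
   * password-file entry.  Hash and words are made-up examples.
   * On some systems, compile with -lcrypt. */
  #define _XOPEN_SOURCE
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      const char *entry = "abJnggxhB/yWI";  /* hypothetical 13-char hash */
      const char *words[] = { "password", "sysadm", "letmein", NULL };
      const char *hash;
      char salt[3];
      int i;

      salt[0] = entry[0];  /* classic crypt(3) salt: first two chars */
      salt[1] = entry[1];
      salt[2] = '\0';

      for (i = 0; words[i] != NULL; i++) {
          hash = crypt(words[i], salt);
          if (hash != NULL && strcmp(hash, entry) == 0) {
              printf("match: %s\n", words[i]);
              return 0;
          }
      }
      printf("no match in word list\n");
      return 1;
  }

Run against a whole password file with a large dictionary, this is exactly
the at-leisure attack the memo describes; it needs no further contact with
the target system.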

And since the FTP server on the IBM was not logging operations, it would
not be immediately obvious that someone had been logging on and trying
random passwords in their tens of thousands in an effort to break into
the system.

Lastly, because the server was not logging, it is very possible that
someone has already exploited this weakness and we would have no trace of
their activities....

Other Thoughts:
The IBM and Novell systems have been linked together, and linked to the
Network LAN, for around a month, I believe.  It took me less than three
minutes to realise that here was a potential system risk, find out the
necessary IP address, and prove that there was a risk.

Since I run a similar but much more exposed system, I adhere to an
accepted code of conduct, and informed the sysadm immediately on
discovering the breach.  Another user, perhaps one dissatisfied with the
Company or tempted by offers and promises of an outside party, might not
have been so considerate of the Company's interests . . .

 [Luckily, this case ended with a general tightening of security and no
 damage. (To date, anyhow...)  Ted Russ]

------------------------------

Date: 11 January 1996 (LAST-MODIFIED)
From: [email protected]
Subject: ABRIDGED info on RISKS (comp.risks)

The RISKS Forum is a moderated digest.  Its USENET equivalent is comp.risks.
SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent) on
your system, if possible and convenient for you.  BITNET folks may use a
LISTSERV (e.g., LISTSERV@UGA): SUBSCRIBE RISKS or UNSUBSCRIBE RISKS.  [...]
DIRECT REQUESTS to <[email protected]> (majordomo) with one-line,
  SUBSCRIBE (or UNSUBSCRIBE) [with net address if different from FROM:]
  INFO     [for further information]

CONTRIBUTIONS: to [email protected], with appropriate,  substantive Subject:
line, otherwise they may be ignored.  Must be relevant, sound, in good taste,
objective, cogent, coherent, concise, nonrepetitious, and without caveats
on distribution.  Diversity is welcome, but not personal attacks.  [...]
ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY.
By submitting an item that is accepted for publication in RISKS, the author
grants permission for unlimited noncommercial public distribution and
redistribution in electronic and print form.  Relevant contributions may
appear in the RISKS section of regular issues of ACM SIGSOFT Software
Engineering Notes or SIGSAC Review.

RISKS can also be read on the web at URL http://catless.ncl.ac.uk/Risks

RISKS ARCHIVES: "ftp ftp.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>
cd risks<CR> or cwd risks<CR>, depending on your particular FTP.  [...]
[Back issues are in the subdirectory corresponding to the volume number.]
  Individual issues can be accessed using a URL of the form
    http://catless.ncl.ac.uk/Risks/VL.IS.html      [i.e., VoLume, ISsue]
    ftp://unix.sri.com/risks  [if your browser accepts URLs.]

------------------------------

End of RISKS-FORUM Digest 17.68
************************