RISKS-LIST: RISKS-FORUM Digest, Summary of messages on UNIX breakins.
THIS FILE IS AVAILABLE FOR FTP FROM CSL.SRI.COM <RISKS>RISKS.REID.

------------------------------------------------------------------------

From: [email protected] (Brian Reid)
Date: 16 Sep 1986 1519-PDT (Tuesday)
To: Peter G. Neumann <[email protected]>   [FOR RISKS]
Subject: Massive UNIX breakins at Stanford
RISKS-LIST: RISKS-FORUM Digest, Tuesday, 16 September 1986  Volume 3 : Issue 56

   Lessons learned from a recent rash of Unix computer breakins

Introduction
  A number of Unix computers in the San Francisco area have
  recently been plagued with breakins by reasonably talented
  intruders. An analysis of the breakins (verified by a telephone
conversation with the intruders!) shows that the networking
  philosophy offered by Berkeley Unix, combined with the human
  nature of systems programmers, creates an environment in which
  breakins are more likely, and in which the consequences of
  breakins are more dire than they need to be.

  People who study the physical security of buildings and military
  bases believe that human frailty is much more likely than
  technology to be at fault when physical breakins occur. It is
  often easier to make friends with the guard, or to notice that he
  likes to watch the Benny Hill show on TV and then wait for that
  show to come on, than to try to climb fences or outwit burglar
  alarms.

Summary of Berkeley Unix networking mechanism:

  The user-level networking features are built around the
  principles of "remote execution" and "trusted host". For example,
  if you want to copy a file from computer A to computer B, you
  type the command
          rcp A:file B:file
  If you want to copy the file /tmp/xyz from the computer that you
  are now using over to computer C where it will be called
  /usr/spool/breakin, you type the command
          rcp /tmp/xyz C:/usr/spool/breakin
  The decision of whether or not to permit these copy commands is
  based on "permission" files that are stored on computers A, B,
  and C. The first command to copy from A to B will only work if
  you have an account on both of those computers, and the
  permission file stored in your directory on both of those
  computers authorizes this kind of remote access.

  Each "permission file" contains a list of computer names and user
  login names. If the line "score.stanford.edu reid" is in the
  permission file on computer "B", it means that user "reid" on
  computer "score.stanford.edu" is permitted to perform remote
  operations such as rcp, in or out, with the same access
  privileges that user "reid" has on computer B.

How the breakins happened.

  One of the Stanford campus computers, used primarily as a mail
  gateway between Unix and IBM computers on campus, had a guest
  account with user id "guest" and password "guest". The intruder
  somehow got his hands on this account and guessed the password.
  There are a number of well-known security holes in early releases
  of Berkeley Unix, many of which are fixed in later releases.
  Because this computer is used as a mail gateway, there was no
  particular incentive to keep it constantly up to date with the
  latest and greatest system release, so it was running an older version
  of the system. The intruder instantly cracked "root" on that
  computer, using the age-old trojan horse trick. (He had noticed
  that the guest account happened to have write permission into a
  certain scratch directory, and he had noticed that under certain
  circumstances, privileged jobs could be tricked into executing
  versions of programs out of that scratch directory instead of out
  of the normal system directories).
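The trojan horse trick described above can be sketched in a few lines of shell (the scratch directory and the fake "ls" are made-up stand-ins, not the actual programs involved at Stanford):

```shell
# Sketch of the trojan-horse trick: a writable scratch directory that
# appears early on a privileged search path lets anyone shadow a
# system command.
scratch=$(mktemp -d)          # stands in for the writable scratch dir

# The intruder plants a fake "ls" there...
cat > "$scratch/ls" <<'EOF'
#!/bin/sh
echo "trojan ran instead of ls"
EOF
chmod +x "$scratch/ls"

# ...and any job whose search path lists the scratch directory first
# runs the trojan instead of /bin/ls.
result=$( PATH="$scratch:$PATH"; ls )
echo "$result"

rm -rf "$scratch"
```

When the job that is tricked this way is privileged, the planted program runs with root's permissions, which is the whole attack.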

  Once the intruder cracked "root" on this computer, he was able to
  assume the login identity of everybody who had an account on that
  computer. In particular, he was able to pretend to be user "x" or
  user "y", and in that guise ask for a remote login on other
  computers. Sooner or later he found a [user,remote-computer] pair
  for which there was a permission file on the other end granting
  access, and now he was logged on to another computer. Using the
  same kind of trojan horse tricks, he was able to break into root
  on the new computer, and repeat the process iteratively.

  In most cases the intruder left trojan-horse traps behind on
  every computer that he broke into, and in most cases he created
  login accounts for himself on the computers that he broke into.
  Because no records were kept, it is difficult to tell exactly how
  many machines were penetrated, but the number could be as high as
  30 to 60 on the Stanford campus alone. An intruder using a
  similar modus operandi has been reported at other installations.

How "human nature" contributed to the problem

  The three technological entry points that made this intrusion
  possible were:

     * The large number of permission files, with entirely
       too many permissions stored in them, found all over the campus
       computers (and, for that matter, all over the ARPAnet).

     * The presence of system directories in which users have write
       permission.

     * Very sloppy and undisciplined use of search paths in privileged
       programs and superuser shell scripts.


Permissions: Berkeley networking mechanism encourages carelessness.

  The Berkeley networking mechanism is very very convenient. I use
  it all the time. You want to move a file from one place to
another? Just type "rcp" and it's there. Very fast and very
  efficient, and quite transparent. But sometimes I need to move a
  file to a machine that I don't normally use. I'll log on to that
  machine, quickly create a temporary permission file that lets me
  copy a file to that machine, then break back to my source machine
  and type the copy command. However, until I'm quite certain that
  I am done moving files, I don't want to delete my permission file
  from the remote end or edit that entry out of it. Most of us use
  display editors, and oftentimes these file copies are made to
  remote machines on which the display editors don't always work
  quite the way we want them to, so there is a large nuisance
  factor in running the text editor on the remote end. Therefore
  the effort in removing one entry from a permission file--by
  running the text editor and editing it out--is high enough that
  people don't do it as often as they should. And they don't want
  to *delete* the permission file, because it contains other
  entries that are still valid. So, more often than not, the
  permission files on rarely-used remote computers end up with
  extraneous permissions in them that were installed for a
  one-time-only operation. Since the Berkeley networking commands
  have no means of prompting for a password or asking for the name
  of a temporary permission file, everybody just edits things into
  the permanent permission file. And then, of course, they forget
  to take it out when they are done.


Write permission in system directories permits trojan horse attacks.

  All software development is always behind schedule, and
  programmers are forever looking for ways to do things faster. One
  convenient trick for reducing the pain of releasing new versions
  of some program is to have a directory such as /usr/local/bin or
  /usr/stanford/bin or /usr/new in which new or locally-written
  versions of programs are kept, and to ask users to put that
  directory on their search paths. The systems programmers then
  give themselves write access to that directory, so that they can
  install a new version just by typing "make install" rather than
  taking some longer path involving root permissions. Furthermore,
  it somehow seems more secure to be able to install new software
  without typing the root password. Therefore it is a
  nearly-universal practice on computers used by programmers to
  have program directories in which the development programmers
  have write permission. However, if a user has write permission in
  a system directory, and if an intruder breaks into that user's
  account, then the intruder can trivially break into root by using
  that write permission to install a trojan horse.
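A mechanical audit for this condition is straightforward. The sketch below flags directories on a search path that are writable by users other than their owner; the two temporary directories are a made-up fixture standing in for real system directories:

```shell
# Sketch: flag other-writable directories on a search path, the
# condition that enables the trojan-horse attack described above.
good=$(mktemp -d); bad=$(mktemp -d)
chmod 755 "$good"
chmod 777 "$bad"              # world-writable, like the scratch dir

check_path=$good:$bad
writable=""
IFS=:
for dir in $check_path; do
  # -perm -0002 matches directories writable by "other"
  if [ -n "$(find "$dir" -maxdepth 0 -perm -0002 2>/dev/null)" ]; then
    writable="$writable $dir"
  fi
done
unset IFS
echo "writable directories on path:$writable"

rm -rf "$good" "$bad"
```

Run against root's actual search path, any directory this reports is a place where a compromised unprivileged account can plant a trojan.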

Search paths: people usually let convenience dominate caution.

  Search paths are almost universally misused. For example, many
  people write shell scripts that do not specify an explicit search
  path, which makes them vulnerable to inheriting the wrong path.
  Many people modify the root search path so that it will be
  convenient for systems programmers to use interactively as the
  superuser, forgetting that the same search path will be used by
  system maintenance scripts run automatically during the night.
  It is so difficult to debug failures that are caused by incorrect
  search paths in automatically-run scripts that a common "repair"
  technique is to put every conceivable directory into the search
  path of automatically-run scripts. Essentially every Unix
  computer I have ever explored has grievous security leaks caused
  by underspecified or overlong search paths for privileged users.

Summary conclusion: Wizards cause leaks

  The people who are most likely to be the cause of leaks are
  the wizards. When something goes wrong on a remote machine, often
  a call goes in to a wizard for help. The wizard is usually busy
  or in a hurry, and he often is sloppier than he should be with
  operations on the remote machine. The people who are most likely
  to have permission files left behind on stray remote machines are
  the wizards who once offered help on that machine. But, alas,
  these same wizards are the people who are most likely to have
  write access to system directories on their home machines,
  because it seems to be in the nature of wizards to want to
  collect as many permissions as possible for their accounts. Maybe
  that's how they establish what level of wizard they are. The
  net result is that there is an abnormally high probability that
  when an errant permission file is abused by an intruder, that it
  will lead to the account of somebody who has an unusually large
  collection of permissions on his own machine, thereby making it
  easier to break into root on that machine.

Conclusions.

  My conclusions from all this are these:
     * Nobody, no matter how important, should have write permission
       into any directory on the system search path. Ever.

     * Somebody should carefully re-think the user interface of the
       Berkeley networking mechanisms, to find ways to permit people to
       type passwords as they are needed, rather than requiring them to
       edit new permissions into their permissions files.

     * The "permission file" security access mechanism seems
       fundamentally vulnerable. It would be quite reasonable
       for a system manager to forbid the use of them, or to
       drastically limit the use of them. Mechanized checking is
       easy.

     * Programmer convenience is the antithesis of security, because
       it is going to become intruder convenience if the programmer's
       account is ever compromised. This is especially true in
       setting up the search path for the superuser.
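The "mechanized checking is easy" point above can be made concrete: a one-line find over the home directories reports every permission file and the entries it grants, for a manager to purge or justify. The directory layout below is a made-up fixture, not Stanford's:

```shell
# Sketch: walk a tree of home directories and report every permission
# file plus the remote identities it trusts.
homes=$(mktemp -d)
mkdir -p "$homes/alice" "$homes/bob"
printf 'score.stanford.edu reid\n' > "$homes/alice/.rhosts"

report=$(find "$homes" -name .rhosts -print | while read -r f; do
  echo "$f:"
  sed 's/^/    /' "$f"       # indent the entries under the filename
done)
echo "$report"

rm -rf "$homes"
```
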



Lament
  I mentioned in the introduction that we had talked to the
  intruders on the telephone. To me the most maddening thing about
  this intrusion was not that it happened, but that we were unable
  to convince any authorities that it was a serious problem, and
  could not get the telephone calls traced. At one point an
  intruder spent 2 hours talking on the telephone with a Stanford
  system manager, bragging about how he had done it, but there was
  no way that the call could be traced to locate him. A few days
  later, I sat there and watched the intruder log on to one
  Stanford computer, and I watched every keystroke that he typed on
  his keyboard, and I watched him break in to new directories, but
  there was nothing that I could do to catch him because he was
  coming in over the telephone. Naturally as soon as he started to
  do anything untoward I blasted the account that he was using and
  logged him off, but sooner or later new intruders will come
  along, knowing that they will not be caught because what they are
  doing is not considered serious. It isn't necessarily serious,
  but it could be. I don't want to throw such people in jail,
  and I don't want to let them get away either. I just want to
  catch them and shout at them and tell them that they are being
  antisocial.

Brian Reid
DEC Western Research and Stanford University

------------------------------

From: [email protected] (Dave Curry)
To: [email protected]
Cc: [email protected]
Subject: Massive UNIX breakins
Date: Wed, 17 Sep 86 08:01:03 EST
RISKS-LIST: RISKS-FORUM Digest,  Wednesday, 17 Sept 1986  Volume 3 : Issue 58

Brian -

   I feel for you, I really do.  Breakins can be a real pain in the
neck, aside from being potentially hazardous to your systems.  And, we
too have had trouble convincing the authorities that anything serious
is going on.  (To their credit, they have learned a lot and are much
more responsive now than they were a few years ago.)

   I do have a couple of comments though.  Griping about the Berkeley
networking utilities is well and good, and yes, they do have their
problems.  However, I think it really had little to do with the
initial breakins on your system.  It merely compounded an already
existing breakin severalfold.

   Two specific parts of your letter I take exception to:

       One of the Stanford campus computers, used primarily as a mail
       gateway between Unix and IBM computers on campus, had a guest
       account with user id "guest" and password "guest". The intruder
       somehow got his hands on this account and guessed the
       password.

   Um, to put it mildly, you were asking for it.  "guest" is probably
the second or third login name I'd guess if I were trying to break
in.  It ranks right up there with "user", "sys", "admin", and so on.
And making the password to "guest" be "guest" is like leaving the
front door wide open.  Berkeley networking had nothing to do with your
initial breakin, leaving an obvious account with an even more obvious
password on your system was the cause of that.

       There are a number of well-known security holes in early
       releases of Berkeley Unix, many of which are fixed in later
       releases.  Because this computer is used as a mail gateway,
       there was no particular incentive to keep it constantly up to
       date with the latest and greatest system release, so it was
       running an older version of the system.

   Once again, you asked for it.  If you don't plug the holes, someone
will come along and use them.  Again Berkeley networking had nothing to
do with your intruder getting root on your system, that was due purely
to neglect.  Granted, once you're a super-user, the Berkeley networking
scheme enables you to invade many, many accounts on many, many machines.

   Don't get me wrong.  I'm not trying to criticize for the sake of
being nasty here, but rather I'm emphasizing the need for enforcing
other good security measures:

       1. Unless there's a particularly good reason to have one, take
          all "generic" guest accounts off your system.  Why let
          someone log in without identifying himself?

       2. NEVER put an obvious password on a "standard" account.
          This includes "guest" on the guest account, "system" on the
          root account, and so on.

          Enforcing this among the users is harder, but not
          impossible.  We have in the past checked all the accounts
          on our machines for stupid passwords, and informed everyone
          whose password we found that they should change it.  As a
          measure of how simple easy passwords make things, we
          "cracked" about 400 accounts out of 10,000 in one overnight
          run of the program, trying about 12 passwords per account.
          Think what we could have done with a sophisticated attack.

       3. FIX SECURITY HOLES.  Even on "unused" machines.  It's amazing
          how many UNIX sites have holes wide open that were plugged
          years ago.  I even found a site still running with the 4.2
          distributed sendmail a few months ago...

       4. Educate your police and other authorities about what's going
          on.  Invite them to come learn about the computer.  Give
          them an account and some documentation.  The first time we
          had a breakin over dialup (1982 or so), it took us three
          days to convince the police department that we needed the
          calls traced.  Now, they understand what's going on, and
          are much quicker to respond to any security violations we
          deem important enough to bring to their attention.  The
          Dean of Students office is now much more interested in
          handling cases of students breaking in to other students'
          accounts; several years ago their reaction was "so what?".
          This is due primarily to our people making an effort to
          educate them, although I'm sure the increased attention
          computer security has received in the media (the 414's, and
          so on) has had an effect too.
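The first two measures above lend themselves to a mechanical audit. The sketch below scans a passwd-format file for well-known "generic" account names of the kind the intruder guessed; the file here is a made-up fixture (a real audit would read /etc/passwd, and password-guessing itself needs crypt(3) and is not shown):

```shell
# Sketch: flag "generic" guest-style accounts in a passwd-format file.
pw=$(mktemp)
cat > "$pw" <<'EOF'
root:x:0:0:operator:/:/bin/sh
guest:x:100:100:guest account:/tmp:/bin/sh
dmr:x:101:101:real user:/usr/dmr:/bin/sh
EOF

# Field 1 is the login name; flag the obvious guessable ones.
generic=$(awk -F: '$1 ~ /^(guest|user|sys|admin|demo)$/ {print $1}' "$pw")
echo "generic accounts found: $generic"

rm -f "$pw"
```
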

--Dave Curry
Purdue University
Engineering Computer Network

------------------------------

From: [email protected] (Brian Reid)
Date: 17 Sep 1986 0729-PDT (Wednesday)
To: [email protected] (Dave Curry)
Cc: [email protected]
Subject: Massive UNIX breakins
RISKS-LIST: RISKS-FORUM Digest,  Wednesday, 17 Sept 1986  Volume 3 : Issue 58

The machine on which the initial breakin occurred was one that I didn't
even know existed, and over which no CS department person had any
control at all. The issue here is that a small leak on some
inconsequential machine in the dark corners of campus was allowed to
spread to other machines because of the networking code. Security is
quite good on CSD and EE machines, because they are run by folks who
understand security. But, as this episode showed, that wasn't quite good
enough.

------------------------------

Date: Thu, 18 Sep 86 09:12:59 cdt
From: "Scott E. Preece" <preece%[email protected]>
To: [email protected]
Subject: Re: Massive UNIX breakins at Stanford
RISKS-LIST: RISKS-FORUM Digest, Saturday, 20 September 1986 Volume 3 : Issue 60

> From: [email protected] (Brian Reid) The machine on which the initial
> breakin occurred was one that I didn't even know existed, and over
> which no CS department person had any control at all. The issue here is
> that a small leak on some inconsequential machine in the dark corners
> of campus was allowed to spread to other machines because of the
> networking code. Security is quite good on CSD and EE machines, because
> they are run by folks who understand security. But, as this episode
> showed, that wasn't quite good enough.
----------

No, you're still blaming the networking code for something it's not supposed
to do.  The fault lies in allowing an uncontrolled machine to have full
access to the network.  The NCSC approach to networking has been just that:
you can't certify networking code as secure, you can only certify a network
of machines AS A SINGLE SYSTEM.  That's pretty much the approach of the
Berkeley code, with some grafted on protections because there are real-world
situations where you have to have some less-controlled machines with
restricted access.  The addition of NFS makes the single-system model even
more necessary.

scott preece, gould/csd - urbana, uucp: ihnp4!uiucdcs!ccvaxa!preece

------------------------------

Date: Mon, 22 Sep 86 11:04:16 EDT
To: RISKS FORUM    (Peter G. Neumann -- Coordinator) <[email protected]>
Subject: Massive UNIX breakins at Stanford
From: Jerome H. Saltzer <[email protected]>
RISKS-LIST: RISKS-FORUM Digest,  Monday, 22 September 1986  Volume 3 : Issue 62

In RISKS-3.58, Dave Curry gently chastises Brian Reid:

> . . . you asked for it. . . Berkeley networking had nothing to
> do with your intruder getting root on your system, that was due purely
> to neglect.  Granted, once you're a super-user, the Berkeley networking
> scheme enables you to invade many, many accounts on many, many machines.

And in RISKS-3.59, Scott Preece picks up the same theme, suggesting that
Stanford failed by not looking at the problem as one of network security,
and, in the light of use of Berkeley software, not enforcing a no-attachment
rule for machines that don't batten down the hatches.

These two technically- and policy-based responses might be more tenable if
the problem had occurred at a military base.  But a university is a
different environment, and those differences shed some light on environments
that will soon begin to emerge in typical commercial and networked home
computing settings.  And even on military bases.

There are two characteristics of the Stanford situation that
RISK-observers should keep in mind:

    1.  Choice of operating system software is made on many factors,
not just the quality of the network security features.  A university
has a lot of reasons for choosing BSD 4.2.  Having made that choice,
the Berkeley network code, complete with its casual approach to
network security, usually follows because the cost of changing it is
high and, as Brian noted, its convenience is also high.

    2.  It is the nature of a university to allow individuals to do
their own thing.  So insisting that every machine attached to a
network must run a certifiably secure-from-penetration configuration
is counter-strategic.  And on a campus where there may be 2000
privately administered Sun III's, MicroVAX-II's, and PC RT's all
running BSD 4.2, it is so impractical as to be amusing to hear it
proposed.  Even the military sites are going to discover soon that
configuration control achieved by physical control of every network
host is harder than it looks in a world of engineering workstations.

Brian's comments are very thoughtful and thought-provoking.  He describes
expected responses of human beings to typical current-day operating system
designs.  The observations he makes can't be dismissed so easily.

                                       Jerry Saltzer

------------------------------

Date: Mon, 22 Sep 1986  23:03 EDT
From: Rob Austein <[email protected]>
To:   [email protected]
Subject: Massive UNIX breakins at Stanford
RISKS-LIST: RISKS-FORUM Digest,  Monday, 22 September 1986  Volume 3 : Issue 62

I have to take issue with Scott Preece's statement that "the fault
lies in allowing an uncontrolled machine to have full access to the
network".  This may be a valid approach on a small isolated network or
in the military, but it fails horribly in the world that the rest of
us have to live in.  For example, take a person (me) who is
(theoretically) responsible for what passes for security on up to half a
dozen mainframes at MIT (exact number varies).  Does he have any
control over what machines are put onto the network even across the
street on the MIT main campus?  Hollow laugh.  Let alone machines at
Berkeley or (to use our favorite local example) the Banana Junior
6000s belonging to high school students in Sunnyvale, California.

As computer networks come into wider use in the private sector, this
problem will get worse, not better.  I'm waiting to see when AT&T
starts offering a long haul packet switched network as common carrier.

Rule of thumb: The net is intrinsically insecure.  There's just too much
cable out there to police it all.  How much knowledge does it take to
tap into an ethernet?  How much money?  I'd imagine that anybody with
a BS from a good technical school could do it in a week or so for
under $5000 if she set her mind to it.

As for NFS... you are arguing my case for me.  The NFS approach to
security seems bankrupt for just this reason.  Same conceptual bug,
NFS simply aggravates it by making heavier use of the trusted net
assumption.

Elsewhere in this same issue of RISKS there was some discussion about
the dangers of transporting passwords over the net (by somebody other
than Scott, I forget who).  Right.  It's a problem, but it needn't be.
Passwords can be transmitted via public key encryption or some other
means.  The fact that most passwords are currently transmitted in
plaintext is an implementation problem, not a fundamental design
issue.

A final comment and I'll shut up.  With all this talk about security
it is important to keep in mind the adage "if it ain't broken, don't
fix it".  Case in point.  We've been running ITS (which has to be one
of the -least- secure operating systems ever written) for something
like two decades now.  We have surprisingly few problems with breakins
on ITS.  Seems that leaving out all the security code made it a very
boring proposition to break in, so almost nobody bothers (either that
or they are all scared off when they realize that the "command
processor" is an assembly language debugger ... can't imagine why).
Worth thinking about.  The price paid for security may not be obvious.

--Rob Austein <[email protected]>

------------------------------

Date: Mon 22 Sep 86 11:07:04-PDT
From: Andy Freeman <[email protected]>
Subject: Massive UNIX breakins at Stanford
To: [email protected], preece%[email protected]
RISKS-LIST: RISKS-FORUM Digest,  Monday, 22 September 1986  Volume 3 : Issue 62

Scott E. Preece <preece%[email protected]> writes in RISKS-3.60:

   [email protected] (Brian Reid) writes:
       The issue here is that a small leak on some [unknown]
       inconsequential machine in the dark corners of campus was
       allowed to spread to other machines because of the networking code.

   No, you're still blaming the networking code for something it's not
   supposed to do.  The fault lies in allowing an uncontrolled machine to
   have full access to the network.  The NCSC approach to networking has
   been just that: you can't certify networking code as secure, you can
   only certify a network of machines AS A SINGLE SYSTEM.  That's pretty
   much the approach of the Berkeley code, with some grafted on
   protections because there are real-world situations where you have to
   have some less-controlled machines with restricted access.  The
   addition of NFS makes the single-system model even more necessary.

Then NCSC certification means nothing in many (most?) situations.  A
lot of networks cross administrative boundaries.  (The exceptions are
small companies and military installations.)  Even in those that
seemingly don't, phone access is often necessary.

Network access should be as secure as phone access.  Exceptions may
choose to disable this protection but many of us won't.  (If Brian
didn't know about the insecure machine, it wouldn't have had a valid
password to access his machine.  He'd also have been able to choose
what kind of access it had.)  The only additional problem that
networks pose is the ability to physically disrupt others'
communication.

-andy             [There is some redundancy in these contributions,
                  but each makes some novel points.  It is better
                  for you to read selectively than for me to edit. PGN]

------------------------------

Date: 22 Sep 1986 16:24-CST
From: "Scott E. Preece" <preece%[email protected]>
Subject: Massive UNIX breakins at Stanford (RISKS-3.60)
To: [email protected], RISKS%[email protected]
RISKS-LIST: RISKS-FORUM Digest,  Monday, 22 September 1986  Volume 3 : Issue 62

       Andy Freeman writes [in response to my promoting the view
       of a network as a single system]:

>       Then NCSC certification means nothing in many (most?) situations.
--------

Well, most sites are NOT required to have certified systems (yet?). If they
were, they wouldn't be allowed to have non-complying systems.  The view as a
single system makes the requirements of the security model feasible.  You
can't have anything in the network that isn't part of your trusted computing
base.  This seems to be an essential assumption.  If you can't trust the
code running on another machine on your ethernet, then you can't believe
that it is the machine it says it is, which violates the most basic
principles of the NCSC model. (IMMEDIATE DISCLAIMER: I am not part of the
group working on secure operating systems at Gould; my knowledge of the area
is superficial, but I think it's also correct.)
                  [NOTE: The word "NOT" in the first line of this paragraph
                   was interpolated by PGN as the presumed intended meaning.]

--------
       Network access should be as secure as phone access.  Exceptions may
       choose to disable this protection but many of us won't.  (If Brian
       didn't know about the insecure machine, it wouldn't have had a valid
       password to access his machine.  He'd also have been able to choose
       what kind of access it had.)  The only additional problem that
       networks pose is the ability to physically disrupt others'
       communication.
--------

Absolutely, network access should be as secure as phone access,
IF YOU CHOOSE TO WORK IN THAT MODE.  Our links to the outside
world are as tightly restricted as our dialins.  The Berkeley
networking software is set up to support a much more integrated
kind of network, where the network is treated as a single system.
For our development environment that is much more effective.
You should never allow that kind of access to a machine you don't
control.  Never.  My interpretation of the original note was that
the author's net contained machines with trusted-host access
which should not have had such access; I contend that that
represents NOT a failing of the software, but a failing of the
administration of the network.

scott preece
gould/csd - urbana, uucp:       ihnp4!uiucdcs!ccvaxa!preece

------------------------------

Date: Tue, 23 Sep 86 09:16:21 cdt
From: "Scott E. Preece" <preece%[email protected]>
To: [email protected]
Subject: Massive UNIX breakins at Stanford
RISKS-LIST: RISKS-FORUM Digest Wednesday, 24 September 1986 Volume 3 : Issue 63

  [This was an addendum to Scott's contribution to RISKS-3.61.  PGN]

I went back and reviewed Brian Reid's initial posting and found myself more
in agreement than disagreement.  I agree that the Berkeley approach offers
the unwary added opportunities to shoot themselves in the foot and that
local administrators should be as careful of .rhosts files as they are of
files that are setuid root; they should be purged or justified regularly.

I also agree that it should be possible for the system administrator to turn
off the .rhosts capability entirely, which currently can only be done in the
source code and that it would be a good idea to support password checks (as
a configuration option) on rcp and all the other remote services.

scott preece, gould/csd - urbana, uucp: ihnp4!uiucdcs!ccvaxa!preece

------------------------------

Date: Tue, 23 Sep 86 08:41:29 cdt
From: "Scott E. Preece" <preece%[email protected]>
To: [email protected]
Subject: Re: Massive UNIX breakins at Stanford
RISKS-LIST: RISKS-FORUM Digest Wednesday, 24 September 1986 Volume 3 : Issue 63

 > From: Rob Austein <[email protected]>

 > I have to take issue with Scott Preece's statement that "the fault lies
 > in allowing an uncontrolled machine to have full access to the network"...

I stand by what I said, with the important proviso that you notice the word
"full" in the quote.  I took the description in the initial note to mean
that the network granted trusted access to all machines on the net.  The
Berkeley networking code allows the system administrator for each machine to
specify what other hosts on the network are to be treated as trusted and
which are not.  The original posting spoke of people on another machine
masquerading as different users on other machines; that is only possible if
the (untrustworthy) machine is in your hosts.equiv file, so that UIDs are
equivalenced for connections from that machine.  If you allow trusted access
to a machine you don't control, you get what you deserve.
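The trust decision Scott describes can be modeled in a few lines. This is an illustrative sketch, not the actual ruserok() code, and the hostnames are invented:

```python
def is_trusted(client_host, client_user, hosts_equiv, user_rhosts):
    """Simplified model of the Berkeley trusted-host check: a remote
    request is honored without a password if the client host is in
    /etc/hosts.equiv (UIDs equivalenced machine-wide), or if the target
    user's ~/.rhosts lists the host (optionally with a remote user)."""
    if client_host in hosts_equiv:
        return True
    for entry in user_rhosts:
        fields = entry.split()
        if fields and fields[0] == client_host and (
                len(fields) == 1 or fields[1] == client_user):
            return True
    return False

# Hypothetical contents: trusting a machine you don't control ("petro")
# lets anyone who subverts it masquerade as any user here.
hosts_equiv = ["petro.example.edu"]
print(is_trusted("petro.example.edu", "anyone", hosts_equiv, []))  # True
```

The point of the model is the first branch: a hosts.equiv entry equivalences every UID on the named machine, so the trust is only as good as that machine's administration.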

Also note that by "the network" I was speaking only of machines intimately
connected by ethernet or other networking using the Berkeley networking
code, not UUCP or telephone connections to which normal login and password
checks apply.

The description in the original note STILL sounds to me like failure of
administration rather than failure of the networking code.

scott preece

   [OK.  Enough on that.  The deeper issue is that most operating
    systems are so deeply flawed that you are ALWAYS at risk.  Some
    tentative reports of Trojan horses discovered in RACF/ACF2 systems
    in Europe are awaiting details and submission to RISKS.  But their
    existence should come as no surprise.  Any use of such a system in
    a hostile environment could be considered a failure of administration.
    But it is also a shortcoming of the system itself...  PGN]

------------------------------

Date: Mon 22 Sep 86 17:09:27-PDT
From: Andy Freeman <[email protected]>
Subject: UNIX and network security again
To: preece%[email protected]
cc: RISKS%[email protected]
RISKS-LIST: RISKS-FORUM Digest Wednesday, 25 September 1986 Volume 3 : Issue 65

preece%[email protected] (Scott E. Preece) writes:

   If you can't trust the code running on another machine on your
   ethernet, then you can't believe that it is the machine it says it is,
   which violates the most basic principles of the NCSC model.

That's why electronic signatures are a good thing.

   I wrote (andy@sushi):
   >   Then NCSC certification means nothing in many (most?) situations.

   Well, most sites aren't required to have certified systems (yet?).  If
   they were, they wouldn't be allowed to have non-complying systems.

The designers of the Ford Pinto were told by the US DOT to use $x as a
cost-benefit tradeoff point for rear end collisions.  Ford was still
liable.  I'd be surprised if NCSC certification protected a company
from liability.  (In other words, being right can be more important
than complying.)

      [This case was cited again by Peter Browne (from old Ralph Nader
       materials?), at a Conference on Risk Analysis at NBS 15 September
       1986:  Ford estimated that the Pinto gas tank would take $11 each to
       fix in 400,000 cars, totalling $4.4M.  They estimated 6 people might
       be killed as a result, at $400,000 each (the going rate for lawsuits
       at the time?), totalling $2.4M.  PGN]

   Absolutely, network access should be as secure as phone access, IF YOU
   CHOOSE TO WORK IN THAT MODE.  Our links to the outside world are as
   tightly restricted as our dialins.  The Berkeley networking software
   is set up to support a much more integrated kind of network, where the
   network is treated as a single system.  For our development
   environment that is much more effective.  You should never allow that
   kind of access to a machine you don't control.  Never.  My
   interpretation of the original note was that the author's net
   contained machines with trusted-host access which should not have had
   such access; I contend that that represents NOT a failing of the
   software, but a failing of the administration of the network.

My interpretation of Brian's original message is that he didn't have a
choice; Berkeley network software trusts hosts on the local net.  If
that's true, then the administrators didn't have a chance to fail; the
software's designers had done it for them.  (I repeated all of Scott's
paragraph because I agree with most of what he had to say.)

-andy

   [I think the implications are clear.  The network software is weak.
    Administrators are often unaware of the risks.  Not all hosts are
    trustworthy.  The world is full of exciting challenges for attackers.
    All sorts of unrealistic simplifying assumptions are generally made.
    Passwords are typically stored or transmitted in the clear and easily
    readable or obtained -- or else commonly known.  Encryption is still
    vulnerable if the keys can be compromised (flawed key distribution,
    unprotected or subject to bribable couriers) or if the algorithm is
    weak.  There are lots of equally devastating additional vulnerabilities
    waiting to be exercised, particularly in vanilla UNIX systems and
    networks thereof.  Remember all of our previous discussions about not
    trying to put the blame in ONE PLACE.  PGN]

------------------------------

From: [email protected] (Brian Reid)
Date: 25 Sep 1986 0014-PDT (Thursday)
To: [email protected]
Reply-To: [email protected]
Subject: Follow-up on Stanford breakins: PLEASE LISTEN THIS TIME!
RISKS-LIST: RISKS-FORUM Digest Thursday, 25 September 1986  Volume 3 : Issue 66

  "What experience and history teach is that people have never learned
   anything from history, or acted upon principles deduced from it."
               -- Georg Hegel, 1832

Since so many of you are throwing insults and sneers in my direction, I feel
that I ought to respond. I am startled by how many of you did not understand
my breakin message at all, and in your haste to condemn me for "asking for
it" you completely misunderstood what I was telling you, and why.

I'm going to be a bit wordy here, but I can justify it on two counts. First,
I claim that this topic is absolutely central to the core purpose of RISKS (I
will support that statement in a bit). Second, I would like to take another
crack at making you understand what the problem is. I can't remember the
names, but all of you people from military bases and secure installations who
coughed about how it was a network administration failure are completely
missing the point. This is a "risks of technology" issue, pure and simple.

As an aside, I should say that I am not the system manager of any of the
systems that were broken into, and that I do not control the actions of any
of the users of any of the computers. Therefore under no possible explanation
can this be "my fault". My role is that I helped to track the intruders down,
and, more importantly, that I wrote about it.

I am guessing that most of you are college graduates. That means that you
once were at a college. Allow me to remind you that people do not need badges
to get into buildings. There are not guards at the door. There are a large
number of public buildings to which doors are not even locked. There is not a
fence around the campus, and there are not guard dogs patrolling the
perimeter.

The university is an open, somewhat unregulated place whose purpose is the
creation and exchange of ideas. Freedom is paramount. Not just academic
freedom, but physical freedom. People must be able to walk where they need to
walk, to see what they need to see, to touch what they need to touch.
Obviously some parts of the university need to be protected from some people,
so some of the doors will be locked. But the Stanford campus has 200
buildings on it, and I am free to walk into almost any of them any time that
I want. More to the point, *you* are also free to walk into any of them.

Now let us suppose that I am walking by the Linguistics building and I notice
that there is a teenager taking books out of the building and putting them in
his car, and that after I watch for a short while, I conclude that he is not
the owner of the books. I will have no trouble convincing any policeman that
the teenager is committing a crime. More important, if this teenager has had
anything resembling a normal upbringing in our culture, I will have no
trouble convincing the teenager that he is committing a crime. Part of the
training that we receive as citizens in our society is a training in what is
acceptable public behavior and what is not. The books were not locked up, the
doors to the library were not locked, but in general people do not run in and
steal all of the books.

Or let me suppose instead that I am a reporter for the Daily News. I have a
desk in a huge room full of desks. Most of the desks are empty because the
other reporters are out on a story. You've seen scenes like this in the
movies. It is rare in small towns to find those newsrooms locked. Here in
Palo Alto I can walk out of my office, walk over to the offices of the Times
Tribune a few blocks away, walk in to the newsroom, and sit down at any of
those desks without being challenged or stopped. There is no guard at the
door, and the door is not locked. There are 50,000 people in my city, and
since I have lived here not one of them has walked into the newsroom and
started destroying or stealing anything, even though it is not protected.
Why not? Because the rules for correct behavior in our society, which are
taught to every child, include the concept of private space, private
property, and things that belong to other people. My 3-year-old daughter
understands perfectly well that she is not to walk into neighbors' houses
without ringing the doorbell first, though she doesn't quite understand why.

People's training in correct social behavior is incredibly strong, even
among "criminals". Murderers are not likely to be litterbugs. Just because
somebody has violated one taboo does not mean that he will immediately and
systematically break all of them.

In some places, however, society breaks down and force must be used. In the
Washington Square area of New York, for example, near NYU, you must lock
everything or it will be stolen.  At Guantanamo you must have guards or the
Cubans will come take things. But in Palo Alto, and in Kansas and in Nebraska
and Wisconsin and rural Delaware and in thousands of other places, you do not
need to have guards and things do not get stolen.

I'm not sure what people on military bases use computer networks for, but
here in the research world we use computer networks as the building blocks of
electronic communities, as the hallways of the electronic workplace. Many of
us spend our time building network communities, and many of us spend our time
developing the technology that we and others will use to build network
communities. We are exploring, building, studying, and teaching in an
electronic world. And naturally each of us builds an electronic community
that mirrors the ordinary community that we live in. Networks in the Pentagon
are built by people who are accustomed to seeing soldiers with guns standing
in the hallway. Networks at Stanford are built by people who don't get out of
bed until 6 in the evening and who ride unicycles in the hallways.

Every now and then we get an intruder in our electronic world, and it
surprises us because the intruder does not share our sense of societal
responsibilities. Perhaps if Stanford were a military base we would simply
shoot the intruder and be done with it, but that is not our way of doing
things. We have two problems. One is immediate--how to stop him, and how
to stop people like him. Another is very long-term: how to make him and his
society understand that this is aberrant behavior.

The result of all of this is that we cannot, with 1986 technology, build
computer networks that are as free and open as our buildings, and therefore
we cannot build the kind of electronic community that we would like.

I promised you that I would justify what this all has to do with RISKS.

We are developing technologies, and other people are using those
technologies. Sometimes other people misuse them. Misuse of technology is one
of the primary risks of that technology to society. When you are engineering
something that will be used by the public, it is not good enough for you to
engineer it so that if it is used properly it will not hurt anybody. You must
also engineer it so that if it is used *improperly* it will not hurt anybody.
I want to avoid arguments of just where the technologist's responsibility
ends and the consumer's responsibility begins, but I want to convince you,
even if you don't believe in the consumer protection movement, that there is
a nonzero technologist's responsibility.

Let us suppose, for example, that you discovered a new way to make
screwdrivers, by making the handles out of plastic explosives, so that the
screwdriver would work much better under some circumstances. In fact, these
screwdrivers with the gelignite handles are so much better at putting in
screws than any other screwdriver ever invented, that people buy them in
droves. They have only one bug: if you ever forget that the handle is
gelignite, and use the screwdriver to hit something with, it will explode and
blow your hand off. You, the inventor of the screwdriver, moan each time you
read a newspaper article about loss of limb, complaining that people
shouldn't *do* that with your screwdrivers.

Now suppose that you have invented a great new way to make computer networks,
and that it is significantly more convenient than any other way of making
computer networks. In fact, these networks are so fast and so convenient that
everybody is buying them. They have only one bug: if you ever use the network
to connect to an untrusted computer, and then if you also forget to delete
the permissions after you have done this, then people will break into your
computer and delete all of your files. When people complain about this, you
say "don't connect to untrusted computers" or "remember to delete the files"
or "fire anyone who does that".

Dammit, it doesn't work that way. The world is full of people who care only
about expediency, about getting their screws driven or their nets worked. In
the heat of the moment, they are not going to remember the caveats. People
never do. If the only computers were on military bases, you could forbid
the practice and punish the offenders. But only about 0.1% of the computers
are on military bases, so we need some solutions for the rest of us.

Consider this scenario (a true story). Some guy in the Petroleum Engineering
department buys a computer, gets a BSD license for it, and hires a Computer
Science major to do some systems programming for him. The CS major hasn't
taken the networks course yet and doesn't know the risks of breakins. The
petroleum engineer doesn't know a network from a rubber chicken, and in
desperation tells the CS student that he can do whatever he wants as long as
the plots are done by Friday afternoon. The CS student needs to do some
homework, and it is much more convenient for him to do his homework on the
petroleum computer, so he does his homework there. Then he needs to copy it
to the CS department computer, so he puts a permission file in his account on
the CSD computer that will let him copy his homework from the petroleum
engineering computer to the CSD computer. Now the CS student graduates and
gets a job as a systems programmer for the Robotics department, and his
systems programmer's account has lots of permissions. He has long since
forgotten about the permissions file that he set up to move his homework last
March. Meanwhile, somebody breaks into the petroleum engineering computer,
because its owner is more interested in petroleum than in computers and
doesn't really care what the guest password is. The somebody follows the
permission links and breaks into the robotics computer and deletes things.

Whose fault is this? Who is to blame? Who caused this breakin? Was it the
network administrator, who "permitted" the creation of .rhosts files? Was it
the person who, in a fit of expedience, created /usr/local/bin with 0776
protection? Was it the idiot at UCB who released 4.2BSD with /usr/spool/at
having protection 0777? Was it the owner of the petroleum engineering
computer? Was it the mother of the kid who did the breaking in, for failing
to teach him to respect electronic private property? I'm not sure whose fault
it is, but I know three things:

1) It isn't my fault (I wasn't there). It isn't the student's fault (he
   didn't know any better--what can you expect for $5.75/hour). It isn't the
   petroleum engineer's fault (NSF only gave him 65% of the grant money he
   asked for and he couldn't afford a full-time programmer). Maybe you could
   argue that it is the fault of the administrator of the CSD machine, but in
   fact there was no administrator of the CSD machine because he had quit to
   form a startup company. In fact, it is nobody's fault.

2) No solution involving authority, management, or administration will work
   in a network that crosses organization boundaries.

3) If people keep designing technologies that are both convenient and
   dangerous, and if they keep selling them to nonspecialists, then
   expedience will always win out over caution. Convenience always wins,
   except where it is specifically outlawed by authority. To me, this is
   one of the primary RISKs of any technology. What's special about
   computers is that the general public does not understand them well
   enough to evaluate the risks for itself.

------------------------------

Date: Wed, 24 Sep 86 09:35:37 pdt
From: [email protected] (Darrel VanBuer)
Organization: System Development Corporation R&D, Santa Monica
To: hplabs!CSL.SRI.COM!RISKS
Subject: Re: Stanford breakin, RISKS-3.62 DIGEST
RISKS-LIST: RISKS-FORUM Digest  Thursday 25 September 1986  Volume 3 : Issue 67

I think many of the respondents misunderstand what went wrong: there was no
failure in the 4.2 trusted networking code.  It correctly communicated the
message that "someone logged in as X at Y wants to run program Z at W".  The
failure of security was that
 1)  the "someone" was not in fact X because of some failure of security
     (e.g. poor password).
 2)  the real X who had legitimate access on W had previously created a file
     under some user id at W saying X at Y is an OK user.
 3)  the real X was lazy about withdrawing remote privileges (not essential,
     but it widens the window of opportunity).

There's a tough tradeoff between user convenience in a networked environment
and security.  Having to enter a password for every remote command is too
arduous for frequent use.  Interlisp-D has an interesting approach:
 1.  Try a generic userid and password.
 2.  Try a host-specific userid and password.
In either case, if it does not have these items in its cache, it prompts the
user.  The cache is cleared on logout and at certain other times which
suggest the user has gone away (e.g. 20 minutes without activity).
Passwords are never stored in long-term or publicly accessible locations.
It is also less convenient than 4.2, since you must resupply IDs after
every cache flush, and it leaves an opening for lazy users to use the same
ID and password at every host, so that the generic entry is enough.
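The cache behavior described above can be sketched as follows (a hypothetical model, not Interlisp-D code; the 20-minute idle flush is the figure from the text):

```python
import time

class CredentialCache:
    """Model of the Interlisp-D approach: try a generic userid/password,
    then a host-specific one; prompt the user on a miss; flush the cache
    on logout or after a period of inactivity."""
    def __init__(self, idle_limit=20 * 60, clock=time.monotonic):
        self.idle_limit = idle_limit   # seconds; 20 minutes in the text
        self.clock = clock             # injectable for testing
        self.generic = None            # (userid, password) tried first
        self.per_host = {}             # host -> (userid, password)
        self.last_activity = self.clock()

    def _touch(self):
        # Treat a long quiet period as "the user has gone away".
        now = self.clock()
        if now - self.last_activity > self.idle_limit:
            self.clear()
        self.last_activity = now

    def candidates(self, host):
        """Credentials to try, in order: the generic pair first, then
        the host-specific pair; an empty list means prompt the user."""
        self._touch()
        out = []
        if self.generic:
            out.append(self.generic)
        if host in self.per_host:
            out.append(self.per_host[host])
        return out

    def store(self, userid, password, host=None):
        self._touch()
        if host is None:
            self.generic = (userid, password)
        else:
            self.per_host[host] = (userid, password)

    def clear(self):
        # Called on logout as well as on idle timeout.
        self.generic = None
        self.per_host = {}
```

The injectable clock is only a testing convenience; the design point is that credentials live in volatile memory and evaporate when the user plausibly leaves.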

Darrel J. Van Buer, PhD, System Development Corp., 2525 Colorado Ave
Santa Monica, CA 90406, (213)820-4111 x5449
..{allegra,burdvax,cbosgd,hplabs,ihnp4,orstcs,sdcsvax,ucla-cs,akgua}
                                                           !sdcrdcf!darrelj
[email protected]

------------------------------

To: [email protected]
Subject: Unix breakins - secure networks
Date: 24 Sep 86 13:46:39 PDT (Wed)
From: "David C. Stewart" <davest%[email protected]>
RISKS-LIST: RISKS-FORUM Digest,  Friday, 26 September 1986  Volume 3 : Issue 68

       One of the observations made in the wake of the Stanford
breakin is that Berkeley Unix encourages the assumption that the
network itself is secure, when in fact it is not difficult to imagine
someone tapping the ethernet cable and masquerading as a trusted host.

       I have been intrigued by work that has been going on at CMU to
support the ITC Distributed File System.  (In the following, Virtue is
the portion of the filesystem running on a workstation and Vice is
that part running on the file server.)

       The authentication and secure transmission functions are
       provided as part of a connection-based communication package,
       based on the remote procedure call paradigm.  At connection
       establishment time, Vice and Virtue are viewed as mutually
       suspicious parties sharing a common encryption key.  This key
       is used in an authentication handshake, at the end of which
       each party is assured of the identity of the other.  The final
       phase of the handshake generates a session key which is used
       for encrypting all further communication on the connection.
       The use of per-session encryption keys reduces the risk of
       exposure of authentication keys. [1]

       The paper goes on to state that the authentication key may be
supplied by a password (which generates the key but is never sent along
the wire in cleartext) or may be on a user-supplied magnetic card.
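The handshake the paper describes might look roughly like the following sketch. This uses HMAC as a modern, illustrative stand-in; the actual Vice/Virtue protocol differs in detail:

```python
import hashlib, hmac, os

def derive_auth_key(password):
    # A key generated from the password itself, so the password never
    # travels in cleartext (the hash choice here is illustrative).
    return hashlib.sha256(password.encode()).digest()

def respond(key, challenge):
    # Proof of key knowledge: a keyed digest of the peer's challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def handshake(client_key, server_key, rng=os.urandom):
    """Mutually suspicious parties exchange random challenges; each
    side's answer proves it holds the shared key.  The final phase
    derives a fresh per-session key from both challenges, so long-term
    authentication keys are never exposed on bulk traffic."""
    c1, c2 = rng(16), rng(16)
    if not hmac.compare_digest(respond(server_key, c1),
                               respond(client_key, c1)):
        return None   # server could not answer the client's challenge
    if not hmac.compare_digest(respond(client_key, c2),
                               respond(server_key, c2)):
        return None   # client could not answer the server's challenge
    return hmac.new(client_key, c1 + c2, hashlib.sha256).digest()
```

With the same shared key on both sides the handshake yields a fresh session key on every connection; with mismatched keys it fails before any session is established.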

       This is one of the few systems I have seen that does not trust
network peers implicitly.  A nice option when trying to reduce the
risks of networked computing.

Dave Stewart - Tektronix Unix Support - [email protected]

[1] "The ITC Distributed File System: Principles and Design",
Operating Systems Review, 19, 5, p. 43.

------------------------------

From: Dave Taylor <taylor%[email protected]>
To: RISKS@sri-csl (The Risks Mailing Group)
Date: Fri, 26 Sep 86 17:55:53 PDT
Subject: Comment on the reaction to Brians Breakin Tale
Organization: Hewlett-Packard Laboratories, Unix Networking Group
Work-Phone-Number: +1 415 857-6887
RISKS-LIST: RISKS-FORUM Digest,  Friday, 26 September 1986  Volume 3 : Issue 68

I have to admit I am also rather shocked at the attitudes of most of the
people responding to Brian Reid's tale of the breakin at Stanford.  What
these respondents are ignoring is The Human Element.

Any system, however secure and well designed, is still limited by the
abilities, morals, ethics, and so on of the Humans that work with it.  Even
the best paper shredder, for example, or the best encryption algorithm, isn't
much good if the person who uses it doesn't care about security (so they shred
half the document and get bored, or use their husband's first name as the
encryption key).

The point here isn't to trivialize this, but to consider and indeed, PLAN FOR
the human element.

I think we need to take a step back and think about it in this forum...

                                               -- Dave

------------------------------

End of RISKS-FORUM Digest  EXCERPTS ON UNIX BREAKINS AT STANFORD
************************
Date: 14-Oct-1986 0834
From: minow%[email protected]  (Martin Minow, DECtalk Engineering ML3-1/U47 223-9922)
To: [email protected]
Subject: "Pink Floyd" -- (slightly) more on the Stanford breakin

(From Usenet, originally posted by Werner Uhrig @ ut-ngp.uucp):

Path: decvax!ucbvax!ucbcad!nike!think!husc6!ut-sally!ut-ngp!werner
From: [email protected] (Werner Uhrig)
Newsgroups: misc.headlines
Subject: "Pink Floyd" HACKER HITS UNIVERSITY COMPUTERS
Date: Thu, 9-Oct-86 23:24:29 EDT


[from the Sunday paper - I hate it when they use 'hacker' instead of 'cracker']

       'Pink Floyd' attacks lack clear motive

SAN FRANCISCO - A sophisticated computer hacker who calls himself "Pink Floyd"
has broken into dozens of university and business computers around the nation
and taunted the experts who have tried to thwart him.

The hacker reportedly has used telephone connections to break into computers
at Stanford University, Lawrence Berkeley Laboratory, the University of
Illinois, MIT, Mitre, and at least 3 unidentified Silicon Valley companies.

The intruder began the break-ins Aug. 25.  Some of the computers contain
military and government information, ....

However, a computer official at Stanford speculated that the hacker may be
using his extraordinary skill to make a point, since no damage to files or
programs has been found.

"Pink Floyd" has made only subtle alterations to some systems to make
detecting his intrusions more difficult ....

Stanford and others have spent thousands of dollars to improve security as a
result.

Stanford officials said the hacker has tapped into as many as 60 campus
computers, some of which include systems that contain non-classified,
Pentagon-sponsored research data and programs.

The intruder, described by one computer scientist as fitting the profile of a
computer-science graduate student, has called Stanford officials and carried
on a phone conversation with them while breaking security protection in campus
computers.

"This is the most pesky kind of case, involving people trying to get into
systems rather than do damage," said Jay BloomBecker, director of the
National Center for Computer Crime Data in Los Angeles.