Subject: RISKS DIGEST 15.61
REPLY-TO: [email protected]

RISKS-LIST: RISKS-FORUM Digest  Thursday 3 March 1994  Volume 15 : Issue 61

        FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
  ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

*********************************************************************
***** FTP ARCHIVE SUBDIRECTORIES NOW EXIST FOR PREVIOUS VOLUMES. *****
***** See last item for information on RISKS (comp.risks) ***********
*********************************************************************

 Contents:
More on the $1M deposited in bank error (Raju Varghese)
Internet Trojans (Mich Kabay)
Biometrics (Paul Robinson)
Re: Aldrich Ames and the Clipper Project... (Peter Wayner)
Some Thoughts on Clipper, NSA, and one key-escrow alternative (Jim Bidzos)
Algorithms have unclear boundaries (Mike Crawford)
Response from Cambridgeshire Constabulary (Lawrence Kestenbaum)
Re: Threatening E-mail (Peter B Ladkin)
Re: Bounties for Safety-Critical Software (Peter B Ladkin)
Info on RISKS (comp.risks), contributions, subscriptions, FTP, etc.

----------------------------------------------------------------------

Date: Tue, 1 Mar 94 19:22:37 +0100
From: [email protected] (Raju Varghese)
Subject: More on the $1M deposited in bank error (RISKS-15.60)

>The Bank of Stockton (California) accidentally turned Mohammed Idrees
>Kussair's deposit of $100,000 into $1M.  ...

Did the bank say why it converted $100,000 to $1,000,000?  Could it be due to
the following: in India and Pakistan, the placement of commas in large numbers
differs from the Western convention.  The powers of ten and their names are
shown below:

       ten             10
       hundred         100
       thousand        1000
       ten thousand    10,000
       one lakh        1,00,000
       ten lakhs       10,00,000
       one crore       1,00,00,000

Could he have deposited a check made out for $1,00,000 that got `corrected' by
the bank because an `obvious' error had been made in the number of zeroes?
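
As a purely hypothetical sketch of how such a `correction' might happen in
software (my own illustration; nothing is known about the bank's actual
systems), consider a Python routine that assumes Western three-digit grouping
and "repairs" oddly sized comma groups by zero-padding them:

def strip_commas(amount):
    """Removing the separators parses correctly under either convention."""
    return int(amount.replace(",", ""))

def broken_repair(amount):
    """A careless `fix': pad every comma group after the first out to
    three digits, as if the commas had merely been misplaced."""
    groups = amount.split(",")
    return int(groups[0] + "".join(g.zfill(3) for g in groups[1:]))

print(strip_commas("1,00,000"))     # 100000  -- one lakh, correct
print(broken_repair("1,00,000"))    # 1000000 -- the $1M `correction'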

Just a wild guess...  Raju Varghese  [email protected]

 [No lakh of confusion.  Mighty lakh arose?  A rose is arose is arisen.  PGN]

------------------------------

Date: 28 Feb 94 21:35:19 EST
From: "Mich Kabay / JINBU Corp." <[email protected]>
Subject: Internet Trojans

From the Washington Post newswire, 17 February 1994, via Executive News
Service (GO ENS) on CompuServe:

"Break-Ins Prompt A Search for Better Computer Security
By John Burgess, Washington Post Staff Writer

   Since the dawn of the electronic age, the computer password has been a
trusted guardian of secrets large and small.  For many people, obtaining their
own password became a rite of initiation into computer culture itself.  Now,
growing numbers of security experts feel that the password in its common form
is too old and unsophisticated for the job."

The author continues with the following key points:

o    The IAB (Internet Architecture Board) is now arguing for a change to
    alternative methods of identification and authentication (I&A).

o    Proposals include hand-held one-time password generators and smart cards.

o    Concern is growing over the practice of sending cleartext passwords
    through unencrypted communications channels.

o    Trojan Horse programs can capture unencrypted passwords and store them
    for misuse by criminal hackers.

o    The most recent attack described in a CERT-CC (Computer Emergency
    Response Team Coordination Center) Advisory involved a Trojan.

o    There is as yet no evidence of widespread use of the captured passwords.

o    Another approach to I&A is the challenge-response system, in which
    the computer logging on to another system must perform calculations
    based on data and algorithms stored on the client and the server
    machines.  (A minimal sketch follows this list.)

o    Other methods focus on biometric I&A.

o    In general, current use of passwords needs to be improved: users
    should select passwords with strong random components to prevent
    dictionary attacks, and should guard their passwords closely against
    inadvertent or deliberate disclosure.  (A second sketch below
    illustrates the random-component point.)
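
As a minimal sketch of the challenge-response idea, consider the following
Python fragment.  The article names no particular algorithm, so HMAC-SHA256
stands in here, and all names are illustrative rather than anything the IAB
has specified.

import hmac, hashlib, os

# Per-user secret installed on both client and server ahead of time; in
# a real deployment it might live in a smart card or hand-held token.
SHARED_SECRET = b"example shared secret"

def make_challenge() -> bytes:
    """Server side: issue a fresh random challenge for each login."""
    return os.urandom(16)

def respond(challenge: bytes, secret: bytes) -> bytes:
    """Client side: derive a response from the challenge and the secret.
    The secret never crosses the wire, so a Trojan horse or wiretap on
    the channel captures only a single-use value."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    """Server side: recompute the expected response and compare."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
assert verify(challenge, respond(challenge, SHARED_SECRET), SHARED_SECRET)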
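
And a second sketch, of the advice on strong random components.  The article
prescribes no method; this simply draws characters uniformly from a
62-symbol alphabet using a cryptographic random source.

import secrets, string

ALPHABET = string.ascii_letters + string.digits    # 62 symbols

def random_password(length: int = 12) -> str:
    """Each character carries about 5.95 bits of entropy, so a
    12-character password has roughly 71 bits -- far beyond the
    reach of a dictionary attack."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())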

Michel E. Kabay, Ph.D., Director of Education, National Computer Security Assn

------------------------------

Date: Wed, 2 Mar 1994 22:42:24 -0500 (EST)
From: Paul Robinson <[email protected]>
Subject: Biometrics

'Biometrics' refers to the use of physical characteristics as identification.
Human beings use this constantly: when we see a friend, we identify them by
face, size, hair color, etc.  Positive changes in biometric data are usually
remarked upon openly ("Are you losing weight?", "Gee, your hair looks
terrific"), while negative changes are usually not stated publicly ("I see
he's getting married, but his bride-to-be looks somewhat plumper than before;
perhaps they _had_ to...").

However, when someone who doesn't know you needs to identify you, they
usually have to rely on authentication.  The usual forms are paper documents,
photographic or other media, and magnetic credentials issued by a government
or a trusted third party.

With the increased sophistication of duplicating equipment, reliance on
documentary authentication is becoming untenable.  Witness the fact that
anyone giving out a social security number is presumed to be the holder of
that number.  When they aren't, the actual holder is usually chagrined to
discover how much expense and damage must be suffered to rectify the
situation.

Accordingly, various organizations are working on means of real-time
automatic biometric identification of individuals.  The implications of this
can be both good and bad.  As the actual article is rather complicated, I'll
summarize it in a separate posting.

The danger to people is that if, for example, biometric photographic
measurements are used, real-time tracking of individuals could be done as the
technology gets cheaper.  Further, you may never even know that you've been
tracked unless and until something brings it to your attention.

Paul Robinson - [email protected]

------------------------------

Date: Thu, 3 Mar 1994 12:54:01 -0500
From: Peter Wayner <[email protected]>
Subject: Re: Aldrich Ames and the Clipper Project...

What does the Aldrich Ames spy case mean for Clipper?  In an earlier posting,
I argued that it proved people could be bought, and that this meant the
Clipper escrow keys weren't that safe.

Yesterday, I spoke with a Clipper Fan who said that Ames was an isolated case
and Ames wouldn't have a need-to-know about Clipper keys. I'm not so sure.

The fact is that someone in Ames's position would be one of the first people
who would need access to the Clipper keys.  The Department of Justice and
other parts of the government have already placed large orders for
Clipper-capable phones.  Spies working against the US government would be
operating in this arena; it is precisely where Ames should have been looking
for spies to counter.  If everyone at the DOJ and the FBI is going to be
using Clipper, then Ames would be one of the first with a need-to-know.

The private-sector response has been much cooler, and local cops will
probably go years before needing access to Clipper keys.

Now consider another important fact.  To prevent them from abusing their
keylist, the Escrow Key Kops won't have a list matching names with Clipper
Chip ID numbers.  (Memo to NIST: Don't measure response time to tap requests
in Keystones.)  So they won't be able to tell whether a spy in Ames's
position is asking for the Clipper key to some long-sought-after mole or to
the target dictated by his foreign masters.  Such a spy could have complete
and total access to any Clipper-protected conversation he wanted.

People designing the Clipper system must consider not only the security of
the two Escrow Key Centers, but also who gets access to the keys.  That is
going to be the weak link, and if Clipper becomes standard there will be
plenty of potential weak links around.

------------------------------

Date: Thu, 3 Mar 94 09:43:58 PST
From: [email protected] (Jim Bidzos)
Subject: SOME THOUGHTS ON CLIPPER, NSA, AND ONE KEY-ESCROW ALTERNATIVE

In a recent editorial, Dr. Dorothy Denning of Georgetown University argued in
support of the U.S. government's proposed Clipper Chip, a security device that
would allow law enforcement to decipher the communications of users of such
devices.

Dr. Denning attempts to argue that Clipper is necessary for law enforcement
agencies to be able to do their job.  I'm not going to argue that one; there
are plenty of people who can argue more effectively than I that compromising
the privacy of all citizens in order to aid law enforcement is a bad idea,
particularly in the Clipper case, where the arguments from law enforcement
are dubious at best.  (The current justification is inadequate; there may be
better reasons, from a law enforcement perspective, but we haven't heard them
yet.)

Without doubt, law enforcement and intelligence are huge stakeholders in the
debate over encryption. But every individual and corporation in the U.S. must
be included as well. Are NSA's actions really in the best interests of all the
stakeholders? Are there alternatives to the current key escrow program?

If one steps back and looks at what has happened over the last few years, one
might well question the approaches, if not the motivation.  (I believe it may
even be possible to conclude that Clipper is the visible portion of a
large-scale covert operation on U.S. soil by the National Security Agency.)
Over a number of years, through their subversion of the Commerce Department
(which should be championing the cause of U.S. industry, not that of the
intelligence agencies), NSA has managed to put many U.S. government resources
normally beyond their control, both legally and practically, to work on their
program of making U.S. and international communications accessible.

The first step was the MOU (Memorandum of Understanding) between NIST and
NSA.  This document appears to contravene the provisions of the Computer
Security Act of 1987, the intent of which was to give NIST control over
standards-making for the unclassified government and commercial sectors.  The
MOU essentially gave NSA a veto over any proposals for crypto standards by
NIST.

By using the standards-making authority of the National Institute of
Standards and Technology (NIST), NSA is attempting to force the entire U.S.
government to purchase Clipper equipment, since only NIST-standard equipment
may be purchased by government agencies.  This purchasing power can then be
used to force U.S. manufacturers to build Clipper products or risk losing
government business.  (GSA is currently questioning NSA's authority to
control government-wide procurement, and should continue to do so.)  This of
course not only subsidizes Clipper products, but could make Clipper a de
facto standard if the costs associated with alternatives are too high.  These
costs to industry, of ignoring Clipper, come in the form of lost government
market share, costly support for multiple versions of incompatible products,
and non-exportability of non-Clipper products.

It also appears that NSA is desperately seeking a digital signature standard
that would force users to take that signature capability wrapped up with a
Clipper chip.  If this is the case, as it appears to be, then NSA is trying
to use what is probably the most powerful business tool of the information
age as a means to deny us its benefits unless we subsidize and accept Clipper
in the process.  This would, if true, be an unprecedented abuse of government
power to influence U.S. industry and control individual privacy.  (Clipper is
part of a chip called Capstone, which is where the proposed digital signature
standard would be used.)

The overall cost of these policies is unknown.  We only know that NSA has
spent a considerable amount of money on the program directly.  Other costs are
not so obvious. They are:

* A burdened U.S. industry, which will have to build multiple products
 or more expensive products that support multiple techniques;

* A low-intensity "trade war" with the rest of the world over
 encryption;

* Lost sales to U.S. companies, since international buyers will surely go to
 non-U.S. suppliers for non- Clipper encryption, as may buyers in the U.S.;

* Potential abuses by government and loss of privacy for all citizens.

Does NSA truly believe they can displace other methods with Clipper, with
over three million RSA products, the technology they feel threatened by, in
use in the U.S. today?  Not likely; therefore, they have already decided that
these costs are acceptable even if they only delay the inevitable, and that
U.S. industry and U.S. taxpayers should bear these costs, whatever they are.
This policy was apparently developed by unelected people who operate without
oversight or accountability.  Does the White House really support this policy?
(Does this all sound familiar?)

It has been rumored that NSA will gain support from foreign governments for
escrow technology, especially if "local control" is provided.  Even if NSA can
convince their sister organizations around the world to support key escrow (by
offering Clipper technology with a do-your-own-escrow option), will these
other governments succeed in selling it to their industry and citizens?  Most
countries around the world have much stronger privacy laws and a longer
history of individual privacy than the U.S.

WHY AGAIN WHEN IT DIDN'T WORK THE FIRST TIME?

Many seem to have forgotten or are not aware that the Clipper program is not
new, and it's also not the first time NSA has attempted to force
communications security on U.S. industry that it could compromise.  In the
mid-80's, NSA introduced a program called the Commercial COMSEC Endorsement
Program, or CCEP. CCEP was essentially Clipper in a black box, since the
technology was not sufficiently advanced to build lower-cost chips.  Vendors
would join CCEP (with the proper security clearances) and be authorized to
incorporate classified algorithms into communications systems.  NSA had
proposed that they themselves would actually provide the keys to end-users of
such systems.  The new twist is access by key escrow.

To see how little things have changed, consider this quote: "...RSA Data
Security, Inc.  asserts that since CCEP-2 is not published and therefore
cannot be inspected by third parties, the NSA could put a 'trap door' in the
algorithm that would enable the agency to inspect information transmitted by
the private sector.  When contacted, NSA representative Cynthia Beck said that
it was the agency's policy not to comment on such matters."  That was in 1987.
("The Federal Snags in Encryption Technology," Computer and Communications
Decisions, July 1987, pp.  58-60.)

To understand NSA's thinking, and the danger of their policies, consider the
reply of a senior NSA official when he was asked by a reporter for the Wall
Street Journal if NSA, through the CCEP program, could read anyone's
communications: "Technically, if someone bought our device and we made the
keys and made a copy, sure we could listen in. But we have better things to do
with our time." (The Wall Street Journal, March 28, 1988, page 1, column 1, "A
Supersecret Agency Finds Selling Secrecy to Others Isn't Easy," by Bob Davis.)
Another NSA official, in the same Journal story, said "The American Public has
no problem with relying on us to provide the technology that prevents the
unauthorized launch of nuclear weapons. If you trust us to protect against
that, you can trust us to protect private records."  Remember that the Cold
War was still on at that time.  Maybe they're not so busy today.

Law enforcement and intelligence gathering are certainly impeded by the use of
cryptography.  There are certainly legitimate concerns that these interests
have.  But is the current approach really the way to gain support?  People
with a strong military and intelligence bias are making all the decisions.
There seem to be better ways to strike a balance.

AN ALTERNATIVE PROPOSAL

One approach would be to have NIST develop a standard with three levels.  The
first level ("Level I") could specify the use of public-key techniques for
key management and signatures, without any key escrow.  A "Level II"
compliance could add government key escrow to message preparation.  "Level
III" could be key escrow controlled by the user, typically a corporation.
Would this work?  The first level, meeting the standard by itself, would back
up the government's claim that key escrow is voluntary: if I want privacy and
authentication without key escrow, then I can have it, as the government has
claimed I can.  Actions speak louder than words.

Why would any vendors support Level II?  They would find a market in the
government.  (I would certainly like our public servants to use key escrow,
just as I want work product paid for by my corporation to be accessible.)  So
the government can still influence the private sector by buying only products
that include Level II compliance.  Also, Level II products would be
decontrolled for export.  The market can decide; vendors will do what their
customers tell them to.  This satisfies the government's obvious desire to
influence, as a consumer, what happens.

Level III would allow any user to insert escrow keys they control into the
process.  (Level II would not be a prerequisite to Level III.) My company may
want key escrow; I, as an individual, may want to escrow my keys with my
attorney or family members; a standard supporting these functions would be
useful.  I don't necessarily want or need the government involved.
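
As a minimal sketch of what such user-controlled escrow might look like,
here is a simple two-way XOR split in Python (my own illustration, not part
of any proposed FIPS): neither share alone reveals anything about the key,
but the two shares together reconstruct it exactly, so an attorney and a
family member could each hold one.

import os

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a key into two shares for two escrow holders."""
    share1 = os.urandom(len(key))                       # random pad
    share2 = bytes(a ^ b for a, b in zip(key, share1))  # key XOR pad
    return share1, share2

def recover_key(share1: bytes, share2: bytes) -> bytes:
    """Combine both shares to reconstruct the original key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

key = os.urandom(32)
s1, s2 = split_key(key)
assert recover_key(s1, s2) == key    # both shares recover the key
assert s1 != key and s2 != key       # either alone is just noise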

NIST already knows how to write a FIPS that describes software and hardware
implementations, and how to certify that implementations are correct.

This approach certainly isn't perfect, but if the administration really means
what it says, then I submit that this is an improvement over a single
key-escrow FIPS foisted on everyone by NSA, and would stand a much better
chance of striking a workable balance between the needs of the government and
the right of individuals to privacy.  Therefore, it RISKS much less than the
current plan.

The real problem with the way NSA works is that we don't find out what they're
really doing and planning for decades, even when they're wrong.  What if they
are?

In the 60's and 70's, the CIA was out of control, and the Congress, after
extensive hearings that detailed some of the abuses of power by the CIA,
finally moved to force more accountability and oversight.  In the 80's and
90's, NSA's activities should be equally scrutinized by a concerned Congress.

------------------------------

Date: Tue, 1 Mar 1994 12:44:05 -0800
From: Mike Crawford <[email protected]>
Subject: Algorithms have unclear boundaries

This is a letter I just sent to the Patent and Trademark Office, in which I
argue against software patents because the boundaries between components of
software machines are not well defined.

Mail your software patent comments to [email protected].

Mike Crawford           | Author of the Word Services Apple Event Suite.
[email protected] | Free Mac Source Code: ftp sumex-aim.stanford.edu
                       | get /info-mac/dev/src/writeswell-jr-102-c.hqx

Dear Patent Commissioner,

I object to the existence of software patents.  I object for many reasons, but
an argument you may not have heard before is that computer algorithms do not
have clear boundaries.  Thus it is not possible, or not feasible, to prove
that a particular software product has not used some algorithm which has been
patented.  This problem is due to the very nature of software, so I feel that
patents should not apply to software.

It is somewhat difficult to express this concept clearly, but consider this:

A mechanical device, made of metal, has hard surfaces.  There are clearly
defined boundaries between metal parts.  It is clear where one device stops
and another starts in physical space - or at least one can take a machine
apart to determine this.

Thus one can examine a machine for patent violation by taking it apart and
looking at the pieces.

This is not always the case with software.  Computer programs are usually
implemented as some sort of "virtual machine", in which mechanistic concepts
are used to guide the construction of the software, but computer programs have
the peculiar property of having no boundaries.  (Or, if you prefer, soft and
penetrable boundaries.)

This is a problem for software reliability, for example: any line in a
computer program has the capacity to reach out and destroy any other line.
Thus software programs are inherently unreliable. [1] If this could happen in
a mechanical system, a poorly molded windshield wiper might reach into the
engine and throw a piston rod.  This does not happen to mechanical systems,
but it is a daily problem in my work as a programmer.

Now, suppose I devise an algorithm, let us call it A.  The source code that
implements the algorithm looks like this:

A-line-1
A-line-2
A-line-3
A-line-4

and so on.  Suppose also that someone has patented an algorithm B.  He might
examine my source and find that his algorithm can be expressed within my
source code as:

A-line-1
B-line-1
A-line-3
B-line-2

Thus I would expose myself to a lawsuit by writing this code.  In this small
example it could be obvious that I have used someone's patented work... but
suppose my program consisted of 100,000 lines of extremely convoluted code:

A-line-1
..
A-line-9,999
B-line-1
A-line-10,001
..
A-line-90,000
B-line-2

In this case it would be very difficult for any normal human being to know
that the algorithm has been used.  Certainly this could not be done within
the time constraints usually placed on commercial software development.  If
the source code were revealed to a team of engineers and attorneys in a
lawsuit, with the engineers solely focused on finding patent violations, the
violation could be detected.  Thus any normal commercial development effort
is quite at risk of patent lawsuits.
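
To make the interleaving concrete, here is a hypothetical Python fragment in
which the lines of two logically distinct algorithms -- A, a running total,
and B, a running maximum -- alternate freely inside one loop.  Nothing in
the source marks where A stops and B starts.

def summarize(values):
    total = 0.0                  # A-line-1
    largest = float("-inf")      # B-line-1
    for v in values:             # shared by A and B
        total += v               # A-line-2
        if v > largest:          # B-line-2
            largest = v          # B-line-3
    return total, largest

print(summarize([3.0, 1.0, 4.0, 1.0, 5.0]))    # (14.0, 5.0)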

There are other ways in which this could happen.  I will elaborate if the
Patent and Trademark Office is interested in further information.

References:

[1] Bev Littlewood and Lorenzo Strigini, "The Risks of Software,"
    Scientific American, November 1992.

                                                 [email protected]
Michael D. Crawford, Product Development Manager, Working Software, Inc.
PO Box 1844, Santa Cruz, CA 95061-1844  (408) 423-5696  (408) 423-5699 fax


------------------------------

Date:         Tue, 01 Mar 94 17:03:56 EST
From: Lawrence Kestenbaum <[email protected]>
Subject:      Response from Cambridgeshire Constabulary

From: Lawrence Kestenbaum
     School of Criminal Justice
     Michigan State University
     [email protected]

This is in relation to the recent article in RISKS by Ross Anderson.  He
described the case of John Munden, an English police constable who complained
of unauthorized transactions appearing on his bank statement and (without any
other real evidence) was tried and convicted of ATM fraud.

I was sufficiently outraged to (as Ross suggested) send a letter to the Chief
Constable of the Cambridgeshire Constabulary, expressing my deep concern.

I enclosed a copy of the RISKS report and stated that my opinion was based on
this information.

Here is the reply I received today, dated 18 February 1994:

   Dear Sir,

   I acknowledge receipt of your letter concerning the case of
   Police Constable John Munden who was recently convicted of
   obtaining money by deception from the Halifax Building Society.

   The investigation into the allegations made against Police
   Constable Munden was carried out by members of the Suffolk
   Constabulary, and the prosecution was undertaken by the Crown
   Prosecution Service for the Suffolk Area.  In view of this I
   have forwarded your letter to the Deputy Chief Constable of
   the Suffolk Constabulary for his information and the information
   of the Crown Prosecution Service.

   Any action taken against Police Constable Munden in relation to
   his conviction will be made under the Police (Discipline)
   Regulations 1985, which instruct all Chief Officers on breaches
   of the Discipline Code and the scope of punishments intended
   for such a breach.

   Yours faithfully,

   D. R. Winser
   A/Deputy Chief Constable

------------------------------

Date: 1 Mar 94 16:02:32 GMT (Tue)
From: Peter B Ladkin <[email protected]>
Subject: Re: Threatening E-mail (Hasker, RISKS-15.60)

Rob Hasker wrote in RISKS-15.60, concerning an E-mail threat to the life of
the US President:

> (Local news reports suggest that the student intended this to be a
> practical joke.  As I see it, the risk is in assuming that it doesn't
> really matter what you say by e-mail.)

It may be premature to assign a RISK, since one doesn't yet know any motive.
But there is a prima facie argument against Hasker's suggestion of prime RISK,
which is that if it doesn't matter what's said in email, then one might as
well send it in the clear. And Reincke didn't.

It seems that Reincke intended either (a) to confuse the audit trail, or (b)
to give others the impression he was trying to confuse the audit trail.  If
(a), then it shows the risks involved in using a traceable fake; if (b), maybe
he wanted to get caught and prosecuted (this is a known characteristic of some
of those who engage in criminal activity of various sorts), and succeeded in
doing exactly what he tried to do - in which case there seems to have been no
computer-related RISK at all to him!

Peter Ladkin

------------------------------

Date: 1 Mar 94 16:24:39 GMT (Tue)
From: Dr Peter B Ladkin <[email protected]>
Subject: Bounties for Safety-Critical Software (Chastain, RISKS-15.60)

> Here's the [suggestion]: the government organization which is purchasing the
> safety-critical software publishes the specification, the entire source
> code as delivered by private contractors, and technical documentation
> on the hardware environment.  It then offers a bounty to any party
> anywhere who demonstrates a logical error in the software.

Data exist (there's apparently a JPL study) to support the assertion that in
safety-critical software, many more errors are usually to be found in the
requirements specification itself than in the match between the delivered
software/hardware and the requirements spec. If this is correct, information
delivered to the public according to Chastain's suggestion would need to be
supplemented by information enabling the requirements spec itself to be
critiqued. Documentation on the hardware environment is unlikely to be
sufficient by itself. I have only the usual numerous and incomplete
suggestions as to what the extra needed info could be.

However, there are examples in which Chastain's suggestion, unsupplemented,
could have helped.  Consider the September 1993 A320 accident in Warsaw.
Inspection of the requirements (in this case, simply the info contained in the
Pilot's Operating Handbook) reveals the design that fatally inhibited
deployment of the braking systems after the plane had landed.

Peter Ladkin

------------------------------

Date: ongoing
From: [email protected]
Subject: Info on RISKS (comp.risks), contributions, subscriptions, FTP, etc.

The RISKS Forum is a moderated digest.  Its USENET equivalent is comp.risks.
Undigestifiers are available throughout the Internet, but not from RISKS.

SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup on your system, if possible
and convenient for you.  BITNET folks may use a LISTSERV (e.g., LISTSERV@UGA)
with SUBSCRIBE RISKS or UNSUBSCRIBE RISKS as needed.  Users on US Military
and Government machines should contact <[email protected]> (Dennis
Rears).  UK subscribers please contact <[email protected]>.
Local redistribution services are provided at many other sites as well.
Check FIRST with your local system or netnews wizards.  If that does not
work, send requests to <[email protected]> (not automated).

CONTRIBUTIONS: to [email protected], with appropriate, substantive Subject:
line, otherwise they may be ignored.  Must be relevant, sound, in good taste,
objective, cogent, coherent, concise, and nonrepetitious.  Diversity is
welcome, but not personal attacks.  PLEASE DO NOT INCLUDE ENTIRE PREVIOUS
MESSAGES in responses to them.  Contributions will not be ACKed; the load is
too great.  **PLEASE** include your name & legitimate Internet FROM: address,
especially from .UUCP and .BITNET folks.  Anonymized mail is not accepted.
ALL CONTRIBUTIONS CONSIDERED AS PERSONAL COMMENTS; USUAL DISCLAIMERS APPLY.
Relevant contributions may appear in the RISKS section of regular issues
of ACM SIGSOFT's SOFTWARE ENGINEERING NOTES, unless you state otherwise.

ARCHIVES: "FTP CRVAX.SRI.COM<CR>login anonymous<CR>YourName<CR>CD RISKS:<CR>".
Issue j of volume 15 is in that directory: "GET RISKS-15.j<CR>".  For issues
of earlier volumes, "GET [.i]RISKS-i.j<CR>" (where i=1 to 14, j always TWO
digits) for Vol i Issue j.  Vol i summaries in j=00.  "DIR" (or "DIR [.i]")
lists (sub)directory; "bye<CR>" logs out.  CRVAX.SRI.COM = [128.18.30.65];
<CR>=CarriageReturn; FTPs may differ; UNIX prompts for username, password.
WAIS and [email protected] are alternative repositories.

FAX: ONLY IF YOU CANNOT GET RISKS ON-LINE, you may be interested in receiving
it via fax; phone +1 (818) 225-2800, or fax +1 (818) 225-7203 for info
regarding fax delivery.  PLEASE DO NOT USE THOSE NUMBERS FOR GENERAL
RISKS COMMUNICATIONS; as a last resort you may try phone PGN at
+1 (415) 859-2375 if you cannot E-mail [email protected] .

------------------------------

End of RISKS-FORUM Digest 15.61
************************