VIRUS-L Digest   Thursday, 25 Jan 1990    Volume 3 : Issue 22

Today's Topics:

Re: Internet worm writer to go to trial Jan 16th. (Internet)
Re: Universal virus detection
Disinfectant versions (Mac)
STONED virus in LRS lab at Univ. of Guelph (PC)
"Desktop Fractal Design System Not Infected"
Submission for comp-virus
Jerusalem Virus (PC)!
Trial & Double Standard
Signature Programs
Signature Programs
Signature Programs
Practical a-priori viruscan?
Signature Programs

VIRUS-L is a moderated, digested mail forum for discussing computer
virus issues; comp.virus is a non-digested Usenet counterpart.
Discussions are not limited to any one hardware/software platform -
diversity is welcomed.  Contributions should be relevant, concise,
polite, etc., and sent to [email protected] (that's
LEHIIBM1.BITNET for BITNET folks).  Information on accessing
anti-virus, document, and back-issue archives is distributed
periodically on the list.  Administrative mail (comments, suggestions,
and so forth) should be sent to me at: [email protected].
- Ken van Wyk

---------------------------------------------------------------------------

Date:    25 Jan 90 15:23:08 +0000
From:    [email protected] (Gene Spafford)
Subject: Re: Internet worm writer to go to trial Jan 16th. (Internet)

[email protected] (Gordon D. Wishon) writes:
>Gene, in your report (_The Internet Worm Program:  An Analysis_), you
>speculated that the code may have been written by more than one
>person.  Has anything come out in the trial to support this?

To my knowledge, nothing came out in the trial about this.
I do know, however, that the password cracking code in the Worm was
not written by Mr. Morris.  He obtained that code when he spent a
summer at Bell Labs.  It was part of a security testing package
written by other people, and it appears that he made a copy of code
he had access to.

I also suspect that he got the code to break "fingerd" from someone
else, but he would have to comment on that.

- --
Gene Spafford
NSF/Purdue/U of Florida  Software Engineering Research Center,
Dept. of Computer Sciences, Purdue University, W. Lafayette IN 47907-2004
Internet:  [email protected]   uucp:   ...!{decwrl,gatech,ucbvax}!purdue!spaf

------------------------------

Date:    25 Jan 90 08:21:50 +0000
From:    [email protected] (jc van Winkel)
Subject: Re: Universal virus detection

There are a few points that I think should be made. Let's (for the
sake of argument) assume that the undecidability proof is valid. This
only implies that a virus detector will either 1) fail to catch all
viruses or 2) 'find' a virus in an uninfected program. It says
nothing about percentages, just that a wrong conclusion MAY be drawn.
All current viruses can be detected either by appearance or by
behavior. For any current virus: once you know its structure, you can
detect it.

(Take a look at the biological analogy: we can detect almost all
biological viruses that are known, but it is IMPOSSIBLE to detect
viruses that have not yet emerged. If a new virus were synthesized,
either by mutation (which happens all the time) or by man (I don't
like the thought), we would only be able to detect it if it shared
some characteristics with known viruses. If the virus is too new, we
can only conclude that someone has caught a disease that is unknown
to man.)

Now for my second point: it is also undecidable for a virus to
determine whether or not a program is a virus detector. The informal
proof runs along the same path as the informal virus-detection proof.
In the virus-detection proof, use is made of the fact that the virus
'knows' that a program is a detector. Yet this is undecidable, so not
all hope is lost...

There are many more proofs of undecidability; Prof. Cohen mentioned
them in his thesis. They are:

Detection of viruses by appearance.
Detection of viruses by behavior.
Detection of evolution of a known virus.
Detection of triggering mechanisms by appearance.
Detection of triggering mechanisms by behavior.
Detection of evolution of a known triggering mechanism.
Detection of virus detectors by appearance.
Detection of virus detectors by behavior.
Detection of evolution of a known virus detector.

Although this theoretical discussion is interesting, I think that with
some measures we can get pretty safe computing! Using a decent file
access control mechanism, letting no one write to an executable file,
and reserving that right to programs with 'compiler' or 'linker'
status will make viruses very hard to write.
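
The access-control rule above can be sketched in a few lines. The
following Python fragment is purely illustrative: the privilege table,
program names, and file suffixes are invented for the example, and no
real operating system's interfaces are implied.

```python
# Hypothetical sketch of the rule described above: only programs
# granted 'compiler' or 'linker' status may write to executable
# files.  All names here are invented for illustration.

EXECUTABLE_SUFFIXES = (".exe", ".com", ".bin")

# Privilege table: program name -> set of statuses it holds.
PRIVILEGES = {
    "cc":   {"compiler"},
    "link": {"linker"},
    "edit": set(),          # an editor has no right to touch executables
}

def may_write(program: str, target: str) -> bool:
    """Allow a write to an executable target only for compilers/linkers."""
    if not target.lower().endswith(EXECUTABLE_SUFFIXES):
        return True                      # ordinary data file: no restriction
    statuses = PRIVILEGES.get(program, set())
    return bool(statuses & {"compiler", "linker"})

print(may_write("cc", "game.exe"))     # True
print(may_write("edit", "game.exe"))   # False
print(may_write("edit", "notes.txt"))  # True
```

A real system would of course enforce this in the operating system's
file-open path, not in application code.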

These are my own ideas, not necessarily my employer's.

------------------------------

Date:    Thu, 25 Jan 90 09:47:00 -0600
From:    "David D. Grisham" <[email protected]>
Subject: Disinfectant versions (Mac)

This was cross posted to: Newsgroups: comp.sys.mac
Subject: Re: Disinfectant 1.6
Summary: No version 1.6
References: <[email protected]> <[email protected]>
Followup-To: Jeff Wiseman's note

This is probably a typo.  However, there is _NO_ version 1.6
of Disinfectant!  If you have one let me or John Norstad know
immediately.
dave
_In art <[email protected]> [email protected] (Jeff Wiseman) writes:
_>In article <[email protected]> [email protected] writes:
_>>   I have enjoyed using Disinfectant in its earlier versions but
_>>since downloading version 1.6 [including WDEF virus protection], it
                    ^^^^^^^^^^^^^
                    NO Such Version!

_>>refuses to pass ANY of my files.  Any suggestions?  Thanks.
_>Quarantine your Mac??
_>(Sorry Bob, I suppose it isn't funny but I couldn't resist :-)
_>--
_>Jeff Wiseman: ....uunet!tellab5!wiseman OR [email protected]

------------------------------

Date:    Thu, 25 Jan 90 14:01:34 -0500
From:    Peter Jaspers-Fayer <[email protected]>
Subject: STONED virus in LRS lab at Univ. of Guelph (PC)

We have the Stoned virus here (trademark msg "Your PC is now stoned!"
upon boot).  It has appeared in one lab only (so far).  Novell.  They
boot from floppy (strange, I thought "stoned" infected partition
sectors of hard disks, so why msgs upon floppy boot?), and McAfee's
SCAN flags the floppies as being infected in the boot sector. OK, so
I'm wrong.

 Anyhow:

1> Effects of this virus (how dangerous?)?
2> Complete documentation on this virus is available from?
3> Disinfectors available? (pls not SIMTEL, I can never get on SIMTEL20)
4> Helllpp!

/PJ                                                 [email protected]
                    -------------------------------
If you try to please everyone, somebody is not going to like it.

------------------------------

Date:    Thu, 25 Jan 90 10:44:09 -0500
From:    Eric Roskos <[email protected]>
Subject: "Desktop Fractal Design System Not Infected"

This is a follow-up to my 1/12/90 posting in which I observed that I
had bought and used a copy of the Desktop Fractal Design System
allegedly infected with the "1813" virus, but had not seen any
problems.

I have since looked more closely at my executables, which are said to
increase in size as a result of this virus, and have also run the
virus-scan program from SIMTEL20 on the original disk.

My executables do not increase in size, and the original disk does not
show the presence of any virus when checked with this program.  I also
have not seen any abnormal behavior such as was described as being
caused by this virus, while using the system for very heavy software
development during this month.

From the evidence presented on VIRUS-L to date, it appears that of a
sample size of two, one copy was infected, and one was not.  It would
appear that a larger sample would be helpful in order to understand this
problem.

- --
Eric Roskos ([email protected] or [email protected])

------------------------------

Date:    25 Jan 90 20:41:41 +0000
From:    [email protected] (Mike McCann),
        [email protected] (Mike McCann)
Subject: Submission for comp-virus WDEF at Clemson University

For those of you who have an interest in these things...
 The WDEF Mac virus has reached Clemson University (after UNC Chapel
Hill, where else could it go?).  GateKeeper Aid is working fine for
all those people who chose to take my advice and install it.

Happy virus hunting,
- --
Mike McCann       (803) 656-3714   Internet = [email protected]
Poole Computer Center (Box P-21)     Bitnet = [email protected]
Clemson University
Clemson, S.C. 29634-2803         DISCLAIMER = I speak only for myself.

------------------------------

Date:    Thu, 25 Jan 90 15:59:00 -0400
From:    Michael Greve <[email protected]>
Subject: Jerusalem Virus (PC)!

    We recently discovered the Jerusalem A virus attached to a program
   on an office machine.  I have a couple of questions:  1. What does
   this virus do and how does it infect?  2. How does one go about
   getting rid of it?  We have Viruscan, which is how we detected the
   virus, but we have no way of getting rid of PC viruses.  This is
   our first PC virus.   I guess we're in the big time now!!!!

                                       Michael Greve
                                       University of Pa.
                                       Wharton Computing
                                       [email protected]

------------------------------

Date:    25 Jan 90 11:27:10 -0500
From:    Bob Bosen <[email protected]>
Subject: Trial & Double Standard

Now that a jury has determined that Robert Morris Jr. was guilty
of a crime when he wrote and unleashed the famous "Internet
Virus", it is time to consider the implications of the punishment
that society will exact from him. It's also time to examine
society's motives in inflicting and publicizing such punishment.
Obviously, if punishing young Morris can deter others from
attempting similar nefarious activities, society can derive some
benefit. There are those who advocate severe and widely
publicized penalties. On the other hand, there are others who
suggest that a medal of commendation may be more appropriate. It
is unusual to hear such wide-ranging controversy regarding
appropriate punishment after a trial. May I suggest some reasons
for the controversy?

I postulate that the computer world has been attempting to cheat
on the rest of society.  For thousands of years, human society
(in the form of business, banking, and government) has developed
a clear notion of generally accepted practices that defines
prudent management and handling of responsibility. These
standards evolved out of necessity: no single person or
organization can bear, alone, the cost of protecting itself from
crime. Trying to anticipate all the kinds of crimes and
trying to get ironclad protection mechanisms into place
beforehand has proven impossible. Therefore, instead of
attempting "perfect" security systems, society has said "We will
share this burden with you if you will follow generally accepted
practices."  Our judicial system and our civil law enforcement
agencies are the result. But these agencies cannot function and
should not protect those who do not accept their portion of the
responsibility. This means abiding by the law and meeting the
requirements of generally accepted practice.

Long before computers and computer crime, businesses learned that
they could not afford to fully protect themselves from their own
employees. It is simply impossible to hire enough guards, to buy
enough alarms, or to build enough walls. Instead, for thousands
of years, business has relied upon the practice of "auditing".
Individual Accountability has become the generally accepted means
by which banks, businesses, and governments have carried out
their responsibilities to themselves, to each other, and to
society.

I postulate that use of a computer does not grant license to
disregard thousands of years of generally accepted practice. But
it is clear that the computing community has attempted to live
outside this norm for the past 30 years.  The Internet virus is
but the most recent spectacular event to illustrate how far
outside the mainstream of business practice our integrated
computer systems have been built.

If young Morris had believed he would be held accountable for his
actions, he probably would not have attempted his crime. But
a key fact revealed in the Morris case proves that the controls
that Cornell University thought were in place to "secure" their
computers could not be relied upon to hold users accountable for
their actions. Computer records controlled by Morris contained
the user names and passwords used by hundreds of other users.
Whoever obtained these could have successfully masqueraded under
any of the corresponding identities.  This lets all users of
Cornell systems off the hook. This is best illustrated by the
following example:

  Suppose you are a graduate student of computer science at
  Cornell. You would never think of committing a computer crime,
  and have never done so.  You are careful with your handling of
  your passwords, and you change them every month without
  exception. One day, you are summoned to the office of campus
  security, and upon your arrival you notice that the head of Data
  Processing is present, as well as the university's attorney. They
  inform you that several irregularities have been occurring on the
  computer systems you use, and that your user name and password
  have appeared in audits directly associated with these events.
  Perhaps criminal activities have taken place. Your academic
  standing, career, and good name are at stake. You have perhaps 5
  minutes to persuade these people of your innocence or you may
  find yourself in court.

  What do you say? You could proceed along these lines:

  "I didn't do these things. I know I didn't do these things, and
  if you'll think for a minute about the computers and networks you
  force me to use and the way my password traverses them, you'll
  see that there is no way you can hold me accountable for these
  problems. The personal computers I use are not mine, and are not
  under my control. The PC maintenance people and DOS gurus that
  frequent this campus could easily have trapped my passwords at
  any of several levels where they must traverse equipment and
  software that is under THEIR control, not mine. It's YOUR Local
  Area Network, not mine. It's YOUR minicomputers, YOUR data
  communication controllers, YOUR multiplexers, YOUR Wide Area
  Networks, YOUR UNIX gurus, YOUR VTAM people, YOUR MVS experts,
  and YOUR programmers who determine what happens to my password. I
  am given no opportunity to protect my password once it leaves my
  fingertips. Robert Morris was able to collect hundreds of
  passwords from users all around this campus, and this proves that
  the problems you are attempting to pin on me could have been
  caused by any of hundreds of different people. You can't pin this
  on me!" And of course, you'd be right.

The problem is that memorized passwords cannot be relied upon to
hold users accountable for their actions in today's environment
of Integrated Systems. There are too many places in our
networked, integrated environment where insiders are routinely
exposed to passwords. Indeed, it is not unusual for tens of
thousands of copies of passwords to be made, retained, and
broadcast during each month of routine use. I am NOT saying that
a large proportion of these exposures are actually exploited. I
AM saying that the mere PROBABILITY of password exposure
eliminates the POSSIBILITY of user ACCOUNTABILITY. Why is this
situation tolerated in the computing community when equivalent
situations are considered criminally negligent in the practices
of the rest of society?

Why don't bankers abandon the use of credit cards, photo IDs and
signatures and just debit our bank accounts whenever a merchant
tells them our passwords? It would be a lot easier. Imagine
getting a letter from your bank along these lines:

  Dear esteemed depositor:

  As you know, for the past 15 years, you have been entrusted with
  our bank card, and have used it in your banking transactions. We
  are replacing your bank card with a password. You will no longer
  have to carry your bank card. Your new password is "FRED". Please
  keep it secret. Whenever you want to withdraw funds or make
  credit card purchases, just write FRED at the bottom of the
  invoice and we'll take care of the rest. If you ever suspect
  that anybody has found out your password, please drop us a
  post card with "FRED" crossed out in red pen and a new password
  of your choice written in blue ink. It is your responsibility to
  keep your password secret. You will be held accountable for any
  and all banking transactions that say FRED on them, including
  questionable or illegal transactions, for which you will be
  prosecuted to the full extent of the law.

Computer professionals: does this sound painfully familiar? Why
does this sound so silly in a banking or business context when it
is so universally accepted if a computer is involved? Who is
responsible for the perpetration of this double standard? Is it
possible that the punishment of Robert Morris helps us all feel a
little safer from the accusation that a large portion of the
blame should rest with us?

Bob Bosen
Enigma Logic

------------------------------

Date:    25 Jan 90 11:23:16 -0500
From:    Bob Bosen <[email protected]>
Subject: Signature Programs

When I began this in-depth series of discussions on authentication
algorithms and signature programs late last year, I was alarmed and
frustrated by the lack of attention being paid to the subject of well-
researched "standard" authentication algorithms.

At this point I must say I am gratified by the response. We've heard from
Ralph Merkle, Bill Murray, Jim Bidzos, Ross Greenberg, Y. Radai and
others directly, and we've heard indirectly from Fred Cohen, Robert
Jueneman, and Prof. Rabin. Obviously there are a lot of divergent
interests and opinions represented here, but among all the disagreement I
see emergent patterns that I consider very healthy. First, it is clear
that the preponderance of expert opinion now favors increased use of
algorithms that are more sophisticated than what I called "some
programmer's guess" at an authentication algorithm. Second, it is clear
that there are ways of "leveraging" the performance of these
sophisticated authentication algorithms. As I pointed out in my original
posting and as Jim Bidzos has further stressed, slow performance need not
be a problem because it is not always necessary to apply the slower
algorithms directly to the entire file in order to obtain a sophisticated
signature: It is possible to combine two or more well-understood
algorithms in order to obtain the advantages of each and the detriments
of neither. Third, it is clear that the use of sophisticated algorithms
allows functions and features, such as those suggested by Bill Murray
(use of a MAC to ensure that programs are received as they were shipped)
that would otherwise be impractical or untrustworthy.
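
Bill Murray's distribution-MAC suggestion can be illustrated with a
short sketch. HMAC-SHA-256 here stands in for the DES-based ANSI X9.9
MAC discussed in this thread, and the key and program image are
invented for the example.

```python
# Sketch of the distribution-MAC idea: the shipper computes a MAC over
# the program image under a shared key, and the receiver recomputes it
# to verify the program arrived exactly as shipped.  HMAC-SHA-256 is a
# modern stand-in for the DES-based X9.9 MAC of the original posting.

import hmac
import hashlib

def ship(program: bytes, key: bytes) -> tuple[bytes, str]:
    """Return the program together with its MAC, as a vendor would ship it."""
    return program, hmac.new(key, program, hashlib.sha256).hexdigest()

def verify(program: bytes, mac: str, key: bytes) -> bool:
    """Recompute the MAC on receipt and compare in constant time."""
    expected = hmac.new(key, program, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)

key = b"shared-secret-key"
image, mac = ship(b"\x90\x90\xc3  pretend this is an executable", key)
print(verify(image, mac, key))            # True: received as shipped
print(verify(image + b"\xee", mac, key))  # False: modified in transit
```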

Thanks to all who have supported the need for using sophisticated
authentication algorithms!

- -Bob Bosen-
Enigma Logic Inc.

------------------------------

Date:    25 Jan 90 11:24:23 -0500
From:    Bob Bosen <[email protected]>
Subject: Signature Programs

In his posting of Jan 4 '90, Y. Radai acknowledges that the added
sophistication of X9.9 compared to CRC may be well worth the added time
in the case of authentication of bank transfers or other conventional
applications, and then asks me if I have ever considered the possibility
that this sophistication might be wasted when dealing with viruses. Yes.
Of course I have considered this possibility. I considered it long and
hard. I was forced to reach the same conclusion that bankers and
businessmen reached when they insisted that sophisticated means be
developed to protect their business transactions: Not all programs are
video games. Some programs are important. Some programs, if attacked by a
malicious virus or trojan horse, could pervert banking transactions or
business balances.  Some programs, if attacked, could place human lives at
risk. We are not just talking about reformatting and restoring a hard disk
here. I am convinced that businesses and governments need protection that
is capable of resisting the attacks of a sophisticated insider who
specifically targets high-value operations.

Furthermore, I would like to postulate that the day may come when
somebody in the anti-virus community will produce a really good defense
mechanism that is practical, reliable, sensibly priced, and really worthy
of widespread market acceptance. Perhaps two or three such excellent
programs will emerge and distinguish themselves as the clear leaders.
Each such program could eventually be installed on millions, or even tens
of millions, of future computers. Let's hope it happens some day. And
let's hope no virus writer is able to target one such market leader and
forge signatures!  Obviously in such a situation with millions of users,
a protection mechanism would make a tempting target for skilled virus
writers or trojan horse writers. In such a situation, it is entirely
possible that criminals might NOT launch a widespread attack designed to
spread to a large population. (That would reveal their skill and deprive
them of the opportunity to profit from it.) They might instead confine
the spread of their virus to a very specific population of familiar
computers known to control great value.

For these and other reasons, I must disagree with the opinions that Y.
Radai enumerated in his posting and upon which he based his latest set of
conclusions. Specifically, in his opinion (1), Mr. Radai says that a
virus must perform all its work ... "on the computer which it's attacking
and in a very short time". That is not necessarily true. In networked
environments with shared file systems, (and especially if remote
execution is available), viruses could execute on different computers and
take as much time as they needed. Also, as pointed out by Bill Murray,
viral infections during the process of software distribution may be done
off-line at the convenience of the attackers. And it is not necessary for
a virus to SUCCEED in performing all its work in a single very short
attempt. A virus might divide its clandestine attempts into very small
chunks that are attempted frequently enough to guarantee eventual
success, but which do not result in any pollution of off-line storage
unless defense mechanisms (presumably marginally sophisticated ones of
the type Mr. Radai hopes will be sufficient) are successfully bypassed.

I must also disagree with Mr. Radai's opinion (2), wherein he posits "By
its very nature, a virus is designed to attack a large percentage of the
users of a particular system." Why? What's to prevent a virus writer from
launching a "surgical strike" against a small population of familiar
computers that are known to control assets or information of great value?
Once again, I think Mr. Radai's view of the world does not reflect the
realities of business or criminal nature. To be sure, most of the viruses
we've seen so far have behaved like little PAC-MAN games, gobbling up
everything in sight. But how long will it be before this video-arcade
mentality is outgrown?

As to Mr. Radai's opinion (3), he says that "a virus writer is not in a
position to know what checksum algorithms may be in use on the computers
on which his virus is unleashed." That's true TODAY. In fact, TODAY, it's
even worse than that. Most virus writers can safely assume that there is
NO protection of any kind on the target computers. But if our society is
ever going to overcome its current vulnerability, we'll need reliable,
low-cost defense mechanisms that are worthy of widespread use. This
implies a necessity for economies of scale. Therefore, this opinion (3)
will not necessarily be true for very long. Let's HOPE that when we get
to that point, the authentication algorithms used are more sophisticated
than simple checksums!

- -Bob Bosen-
Enigma Logic

------------------------------

Date:    25 Jan 90 11:24:38 -0500
From:    Bob Bosen <[email protected]>
Subject: Signature Programs

Although I disagree with the opinions expressed at the beginning of Mr.
Radai's posting of Jan. 4, 1990, I find his analysis of the trade-offs
between algorithmic sophistication and performance useful. From what I've
read in this forum of late, it does appear that Ross Greenberg and Y.
Radai are at one end of this spectrum and that Bill Murray, Ralph Merkle,
Jim Bidzos, Fred Cohen, and the others mentioned in Mr. Radai's Jan. 4
posting are more or less at the other end with me. (If I've
misrepresented your views here, gentlemen, I hope you'll forgive and
correct me for it. I'm reading between the lines.)


- -Bob Bosen-
Enigma Logic


------------------------------

Date:    Thu, 25 Jan 90 16:47:50 -0400
From:    GEORGE SVETLICHNY <[email protected]>
Subject: Practical a-priori viruscan?

In Virus-L v3 issue20, [email protected] (Russell McFatter) writes:
>...<deleted>
>
>All things considered, we can actually write a program that, given
>a questionable bit of code, can give one of the following results:
>
>OK:  this program is safe and will not infect other applications.
>BAD:  the target program could, under some unknown circumstances,
>      modify other applications.
>INCONCLUSIVE:  The target program either modifies executable code or
>     executes variable data.
>
><deleted>...

The problem of viruses (or more generally, nasties) is mostly program
semantics (what programs do) and human intent and only secondarily
program syntax (how programs are written).  Syntactic problems are
decidable; semantic problems generally are not (for Turing machines,
that is; for finite-state machines they generally are decidable, but
the decision procedures are very rarely practical). What Russell
suggests is that there is a useful, practical, approximate syntactic
solution to the semantic and intentional problem of nasties. I
seriously doubt this.

Up to now nasties have been a bit flamboyant and show-offish. Their
more subdued versions would be identical to normal bugs. The same code
in a spreadsheet program that causes it to erroneously recalculate
every now and then is either a bug if the programmer did not intend it
or a nasty if it was put in on purpose. Surely the "O.K." category
doesn't mean "bug free" (wouldn't it be wonderful otherwise?). To be
100% OK the category can only include those programs that can be
proved to be correct on the object-code level and this means
practically no program at all.

The third category is grossly underestimated. One need not write to
the code segment or execute from the data segment to create a nasty
that doesn't fall in the second category. There are serious dangers
within the processor instruction set. Consider an unconditional jump
to an address given by a register or memory content, extremely useful
in dealing with jump tables. Now, from the purely syntactic point of
view you can't tell where you are jumping to. You can jump to a nasty
piece of code. This can be in a portion of the program that
syntactically is identified as being constant data: message strings,
global program constants, etc.  It can also be in a piece of
syntactically identified and innocuous-looking code *displaced* by a
byte or two. Clarifying: Suppose one has somewhere in the code a
conditional jump, a "jmp nz" instruction. The syntactic analyzer looks
down both branches and finds nothing suspicious. Now the z branch is
dummy since the programmer made sure this condition never holds
(semantics). The dummy branch looks innocuous to the syntactic
analyzer but the same byte sequence *starting from the second byte* is
a nasty piece of business. One enters this nasty code by a "jmp reg"
instruction in some other part of the program.  Wolf semantics in
sheep syntax. Very well, you say, let's put "jmp reg" instructions on
the suspect list; however, the "jmp reg" instruction is equivalent to
"push reg" followed by "ret". Hence all code that uses "push" and
"ret" is suspect, and this includes practically all the useful
software under the sun. What all this means is that an a-priori scan
for nasties *has* to be smart enough to analyze the consequences of
"jmp reg" instructions and their kin, to see that stack discipline is
maintained, to analyze what gets pushed and popped and when this can
become a code address, etc. A lot of semantics to approximate by
syntax.
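
The "second byte" trick is easy to demonstrate with a toy instruction
set. The opcodes below are invented for the example; real
variable-length instruction sets (the 8086's, for instance) admit the
same phase-shifted readings.

```python
# Toy illustration of the "second byte" trick: the same byte string
# decodes as harmless code from offset 0 and as a nasty instruction
# from offset 1.  The instruction set is invented for this example.

OPCODES = {
    0x01: ("LOAD", 1),        # LOAD imm  (one operand byte)
    0x02: ("ADD", 1),
    0x90: ("NOP", 0),
    0xEE: ("ERASE_DISK", 0),  # the "nasty" instruction
    0xFF: ("HALT", 0),
}

def disassemble(code: bytes, start: int = 0):
    """Linearly decode from `start`, returning mnemonic strings."""
    out, i = [], start
    while i < len(code):
        name, nargs = OPCODES[code[i]]
        args = list(code[i + 1 : i + 1 + nargs])
        out.append(name if not args else f"{name} {args[0]:#04x}")
        i += 1 + nargs
    return out

program = bytes([0x01, 0xEE, 0xFF])
print(disassemble(program, 0))  # ['LOAD 0xee', 'HALT']   -- innocuous
print(disassemble(program, 1))  # ['ERASE_DISK', 'HALT']  -- hidden reading
```

A purely syntactic scanner that decodes from offset 0 sees only the
innocuous reading; only by analyzing where "jmp reg" can actually land
(semantics) would it find the second one.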

There is a biological analog to the "second byte" situation above.
Some genes overlap with others, that is, a base-pair sequence
ABC.DEF.G...  codes for one protein (a triplet of bases is a codon
for an amino acid) while the *same* but phase-shifted sequence
BCD.EFG.H.... codes for another protein and both are actually produced
by the organism. It's rather remarkable that such "gene multiplexing"
can produce two useful proteins. In machine language code one doesn't
see such code multiplexing since it must be practically impossible to
multiplex two useful code sequences this way, but one can easily
multiplex "silly" with "nasty" code and use the silly to camouflage
the nasty. (Detecting silliness is another semantic problem.)

The "O.K." category is thus practically empty, the virus hackers will
make sure that their creations don't fall in the "BAD" category by
careful programming, and the "INCONCLUSIVE" category must be enlarged
to include nasty "semantically-driven" jumps and code-multiplexing as
a possibility, which makes it contain practically all useful programs.
This is hardly a practical solution.

The upshot of this is that unless one changes to radically different
memory and processor architectures (hard-wired separation of code and
data memory, rigid code boundaries, fixed-instruction-length
processors, separate stack for "call" and "ret", explicit jump-table
handling ... ) there is not much hope for an effective a-priori
scanner for nasties. One will have to be content with a-posteriori
scanners for known nasties and watchdog programs that report on
suspicious activity (semantics) rather than try to detect suspicious
structure (syntax). Beyond this there are of course the people
problems that have to be dealt with by education, law, politics,
psychology, etc.

----------------------------------------------------------------------
George Svetlichny                 |  Multiplexed sentences:
Department of Mathematics         |
Pontificia Universidade Catolica  |  What sort of feet are moldy?
Rio de Janeiro, Brasil            |   Hats or tofee? Tar 'em, oldy!
                                  |
[email protected]              |
----------------------------------------------------------------------

------------------------------

Date:    25 Jan 90 15:14:57 -0500
From:    Bob Bosen <[email protected]>
Subject: Signature Programs

In reading Ross Greenberg's recent comments on Signature Programs, and
in trying to respond to his statements addressed specifically to me,
it appears that he
must have missed my original posting in which I explained ways by which
it is possible to extract excellent performance from authentication means
based on combinations of ANSI X9.9, ISO 8731-2, and conventional CRC
techniques.  (Jim Bidzos has recently described a similar technique which
includes RSA authentication.) For the benefit of others who might have
also missed that background, I'll repeat a brief summary here.

It is possible to greatly leverage the performance of sophisticated
authentication algorithms by carefully controlling certain factors. Among
them are:

1- The PERCENTAGE of the file that is subjected to the sophisticated
algorithm. This can sometimes be quite a small fraction of the whole
file.  (The remainder of the file can be processed by an industry-
standard CRC algorithm. There are various techniques deriving from
cryptology that can be used to cause the effects of the sophisticated
algorithms to "ripple through" all the way to the final signature.)
Properly implemented, these techniques can result in a reliable,
virtually unforgeable signature that is calculated almost as quickly as a
conventional CRC.

2- WHEN the signature is calculated. Obviously you can infuriate your
users if you make them stand around twiddling their thumbs while all your
files are authenticated in batch mode during the bootstrap process. On
the other hand, if most authentication is done "on the fly" as programs
are loaded, users hardly notice the delays.

3- How OFTEN the signatures are calculated. It really isn't necessary to
recalculate each and every signature every day, or even every time a
program is executed. Sensible authentication frequencies will depend on
the work environment, presence of known threats, and the value of assets
controlled, but may average once or twice a month in typical business
environments.

4- The ALGORITHM chosen. Although its strength is not as well researched
as DES, ISO 8731-2 has withstood at least some respectable public
scrutiny, and runs at least ten times as fast as DES. Early indications
are that SNEFRU is a very strong algorithm that is much faster than DES.
RSA is much slower than DES. (And as I've consistently said since my
earliest posting, CRCs of varying strengths are available and can be used
in appropriate combinations with some of the more sophisticated algorithms
to speed things up still further.)
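
Factor 1 above can be sketched as follows. This is a rough
illustration, not a vetted scheme: HMAC-SHA-256 stands in for the
"sophisticated" keyed algorithm (X9.9 or ISO 8731-2 in the posting),
zlib's CRC-32 for the fast industry-standard CRC, and the 512-byte
split is an arbitrary choice.

```python
# Leveraged signature sketch: the slow keyed algorithm touches only a
# small prefix of the file; a fast CRC covers the bulk, and the CRC is
# folded into the MAC input so its effect "ripples through" to the
# final signature.  Algorithm choices and sizes are illustrative only.

import hmac
import hashlib
import zlib

def leveraged_signature(data: bytes, key: bytes, strong_bytes: int = 512) -> str:
    """Strong MAC over a small prefix, fast CRC over the remainder."""
    head, tail = data[:strong_bytes], data[strong_bytes:]
    crc = zlib.crc32(tail).to_bytes(4, "big")        # fast pass over the bulk
    mac = hmac.new(key, head + crc, hashlib.sha256)  # slow pass over little data
    return mac.hexdigest()

key = b"site-secret"
sig = leveraged_signature(b"A" * 1_000_000, key)
assert sig == leveraged_signature(b"A" * 1_000_000, key)       # reproducible
assert sig != leveraged_signature(b"A" * 999_999 + b"B", key)  # tail tamper caught
```

Note the trade-off this sketch makes explicit: a change to the bulk of
the file is caught only as reliably as the CRC detects it, which is
why the choice and combination of algorithms matters.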

By judiciously balancing these variables, it is possible to create a
fast, reliable, sophisticated system that performs so quickly that users
hardly notice it. I have to agree with Ross Greenberg that a
sophisticated algorithm that performs poorly won't get used at all, and
is therefore worse than an unsophisticated algorithm. But I also know,
from direct, first-hand experience, that we need not limit ourselves to
thinking of sophisticated algorithms as being slow performers. All things
considered, there is really no reason NOT to abandon the simplistic
algorithms in favor of those that are likely to be beyond compromise by
virus writers for several years to come.

- -Bob Bosen-
Enigma Logic

------------------------------

End of VIRUS-L Digest
*********************