Aucbvax.1813
fa.info-micro
utzoo!duke!decvax!ucbvax!CSTACY@MIT-AI
Thu Jun 18 14:10:20 1981
INFO-MICRO Digest V3 #50
INFO-MICRO AM Digest Thursday, 18 June 1981 Volume 3 : Issue 50
Today's Topics:
Microcomputer Architecture - Comparing Machines
----------------------------------------------------------------------
Date: 14 May 1981 1035-EDT
From: KELLY at RUTGERS
Re: A note on comparing various microprocessors
Dear INFO-MICRO readers:
The following is a long comment that has been circulating for several
days among the Rutgers micro crowd, and they (from what they've told
me) feel it might be an interesting enough new wrinkle on an old
problem to send it on to INFO-MICRO for more reaction. So for what
it's worth, here's the result of one engineer's insomnia .... replies
to KELLY at RUTGERS if they are in the nature of value judgements on
the concept, otherwise helpful suggestions to INFO-MICRO@AI.
Van Kelly
KELLY @ RUTGERS
In recent (4 weeks ago) discussions on INFO-MICRO, the 6502
crusaders, the Z80 holy-warriors, and the 6800 kamikazes began a
potentially fruitful discussion of the advantages and disadvantages
of the various microprocessor chips on the market. Over several
days, the discussion sank slowly down into a quagmire of subjectivity
and expired. I propose that the subject be resurrected, but
generalized into a more abstract and quantitative discussion of the
procedures by which microprocessors are to be judged.
Lest you misunderstand, I have indeed heard of benchmarks, but have
not in the past found them sufficiently comprehensive, unless made
extremely application-specific. What I am asking is:
a. What sorts of things does one look for in a "good" microprocessor?
(no, this is not an obvious question at all,
beyond the initial generalities)
b. Is there such a thing as a "typical" spectrum of applications
for a given chip, or class of chips, from which benchmark
tests should be drawn?
c. When does one say that it is NOT reasonable to compare two
particular chips? When do bus width or instruction set
differences cause a comparison of two processors to
become "unfair"? (As an example, I cite a remark on INFO-MICRO
to the effect that 6809 and 8088 should not be comparatively
benchmarked, in spite of the fact that they are regularly used
in similar applications, have "low" cost, are intended by their
manufacturers as incremental upgrades of earlier, mutually
comparable chips [6802 and 8085], and are externally compatible
with 8-bit data busses.)
d. How can one reasonably (and somewhat objectively) WEIGHT the
features of microprocessors in coming to a comparative value
judgement in the context of a specific class of applications?
(would that some of our most ardent chip proponents could do
some hard introspection at this point!).
e. What sorts of meaningful evaluation tools are already available
for judging the merits of micros? What lessons should we learn
from the rather "parochial" process of benchmarking that
was transplanted wholesale from the mainframe/mini domain?
The following pages contain some (more and less organized) notes I
came up with while trying to answer some of the above questions. If
you have no real interest in chip comparisons or the general topic of
benchmarking, just ^O right here because there's plenty more to
come....
A. WHAT ARE THE FEATURES THAT MAKE A "GOOD" MICROPROCESSOR, BOTH IN
AND OUT OF CONTEXT, WITH NO ATTEMPT AT WEIGHTING THEM.
1. Ease of integrating CPU with support hardware
a. Comparative chip count/cost to implement "typical" system.
b. Conceptual/electrical simplicity of bus structure and timing.
c. Availability of a variety of support system "building
blocks".
d. "Technological slack" - i.e. how tolerant is the chip
of designers who try to do questionable things with it.
(e.g. marginal overload of bus-driving circuits,
pushing the published timing specs to the limit,
less-than-clean PC layouts, poorly regulated/bypassed
power supplies [hooray CMOS], field-induced ESD,
thermal stress, etc.)
e. Special off-bus features that enhance system performance
(e.g. on-chip I/O of MC6801/3-TI9900-8748 chips).
f. Hardware support of real-time response to outside world
(good interrupt structures, timers)
g. Are there specific hooks for architectural upgrading (future
co-processors, multi-master bus support, abortable bus cycles,
etc.)? Are there unnecessary hindrances in the architecture to
future upgrading (the Z-80 7-bit refresh counter vs.
some 64K rams comes to mind), or to the development of
an upward/downward compatible line of products (e.g.
6809 subsumes the 6802 at the assembler source level
but the jump from 6809 to 68000 is a quantum leap)
2. Object Code Compactness
a. A realistic static-frequency-based scheme of expanding
opcodes/address-modes. (e.g. 8086 or 6809).
b. Suitable choice of programming primitives for "typical"
compiler code generator output. (e.g. several modes
of stack-offset addressing for ALGOL-like languages,
fast indirection operators, and some mercy for LISP
people).
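The idea in 2.a can be made concrete with a toy calculation: under a
static-frequency-based expanding opcode scheme, the expected bytes per
instruction is the frequency-weighted average of the encoded lengths.
The mnemonics, frequencies, and byte lengths below are invented purely
for illustration; they describe no real chip.

```python
# Sketch: expected object-code size under a static-frequency-based
# expanding opcode scheme. All frequencies and encoded lengths are
# hypothetical, for illustration only.

# mnemonic: (static frequency, encoded length in bytes)
flat = {"LD": (0.30, 2), "ADD": (0.10, 2), "BR": (0.20, 2), "CALL": (0.05, 2)}
expanding = {"LD": (0.30, 1), "ADD": (0.10, 2), "BR": (0.20, 1), "CALL": (0.05, 3)}

def expected_bytes(encoding):
    """Average bytes per instruction, weighted by static frequency."""
    total_freq = sum(f for f, _ in encoding.values())
    return sum(f * n for f, n in encoding.values()) / total_freq

print(expected_bytes(flat))       # every opcode costs the same
print(expected_bytes(expanding))  # frequent ops get the short opcodes
```

Giving the short encodings to the statically frequent operations (as
the 8086 and 6809 designers did) drives the expected size down even
though rare operations get longer.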
3. Speed of Execution referred to a CONSTANT memory access time.
Enough already of comparing Apples and Oranges (no pun
intended)! It is bus bandwidth, not CPU clock frequency,
that must be factored out of speed comparisons of various
CPU's. Within this constraint, let each microprocessor be
judged in its most favorable light; i.e. if you are comparing
Z-80 with 6502 (a pox on both your houses) for a 500 ns
memory access, either use a 2.? MHz Z-80 and a 1 MHz 6502,
or else a 6MHz Z-80B and a 6512C, with their clocks and
wait-states tweaked.
a. Speed-optimization of loop-control primitives (e.g.
decrement-register-and-branch-if-zero as a fast
instruction).
b. Rapid calculation of interrupt vectors.
c. Number and generality of registers/scratchpads to allow
application of traditional automatic local optimization
techniques.
d. Good hooks for bringing up fast frequently-used operating-
system primitives (e.g. I/O block moves, semaphores, scheduler
primitives and inter-process communications).
e. Saturation of bus timing cycle - i.e. what fraction of
real-time does the CPU actually do something useful on
the bus? Is there a sensible fetch-ahead mechanism?
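The normalization argued for above can be sketched numerically: hold
the memory access time constant, let each CPU clock as fast as that
memory allows, and only then compare loop times. The clocks-per-access
and cycles-per-iteration figures below are hypothetical stand-ins, not
measured values for any real part.

```python
# Sketch: normalizing a loop benchmark to a CONSTANT memory access
# time instead of a common clock frequency. All cycle counts and
# clocks-per-memory-access figures are hypothetical.

T_MEM = 500e-9  # fixed memory access time, in seconds

cpus = {
    # name: (clock periods per memory access, clock cycles per loop iteration)
    "cpu_a": (1, 7),   # a 6502-like part: one memory access per clock
    "cpu_b": (3, 18),  # a Z-80-like part: several clocks per access
}

def loop_time(name):
    """Seconds per loop iteration with the clock tuned so that one
    memory access takes exactly T_MEM."""
    clocks_per_access, clocks_per_iter = cpus[name]
    clock_period = T_MEM / clocks_per_access  # fastest legal clock
    return clocks_per_iter * clock_period

for name in cpus:
    print(name, loop_time(name))
```

Note that "cpu_b" needs more clock cycles per iteration yet finishes
sooner, because the same 500 ns memory lets it run a faster clock -
exactly why clock frequency alone is the wrong basis for comparison.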
4. Ease of Programming -- Programmer Productivity.
a. Will I [have to/want to] use an assembler? If not,
AND if a good compiled language is available AND if the
overhead of the compiled code is not excessive (see 2.b
above) THEN ease-of-programming is not really as dependent
on the CPU as it is on the HLL I am using (i.e. the quality
of the available compiler will be weighted heavily in any
judgement of the CPU).
b. How orthogonal are the addressing modes to the opcodes?
(e.g. are certain opcodes "arbitrarily" excluded from utilizing
certain addressing modes?)
c. How closely does the machine adhere to the 0|1|(2)|infinity
rule in providing programming "features"? (e.g. either
give the programmer NO accumulator to play with, or else
ONE accumulator, or else TWO INTERCHANGEABLE accumulators,
or else make everything in sight an accumulator). This
relates to the ease of constructing general-purpose
optimizing protocols for high-level languages, as well as
the learning curve for new assembly programmers.
d. How close to being absolutely general-purpose are the
registers? Is there one register that is a bottleneck?
(e.g. the 6800 X register, before PUSH X and PULL X were
implemented on the 6801; or the COSMAC accumulator)
e. How many lines of assembler code are required to implement
"typical" lines of HLL code (ye olde benchmark)? Are many
of these cases amenable to condensation with a FEW
general-purpose macros?
f. Does the assembler exploit special cases, or must the
programmer explicitly spell everything out (with extensive
use of conditional macros, etc.)?
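One crude way to put a number on the orthogonality question in 4.b is
the fraction of all (opcode, addressing-mode) pairs the instruction
set actually implements. The instruction table below is invented for
illustration and matches no real chip.

```python
# Sketch: a crude orthogonality measure - the fraction of all
# (opcode, addressing-mode) pairs that are actually legal.
# The instruction table below is hypothetical.

opcodes = ["LDA", "STA", "ADD", "CMP"]
modes = ["immediate", "direct", "indexed", "indirect"]

# legal (opcode, mode) pairs; a fully orthogonal machine has all 16
legal = {
    ("LDA", m) for m in modes
} | {
    ("STA", "direct"), ("STA", "indexed"),   # no STA immediate, of course
    ("ADD", "immediate"), ("ADD", "direct"),
    ("CMP", "immediate"), ("CMP", "direct"), ("CMP", "indexed"),
}

ratio = len(legal) / (len(opcodes) * len(modes))
print(ratio)  # the closer to 1.0, the more orthogonal
```

A compiler code generator cares about exactly this ratio: every hole
in the table is a special case it must work around.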
5. The GOTCHAS true Hackers don't want to think about
a. Who manufactures it? Is there at least one Japanese source,
as well as American/European multiple-sources?
Does at least one of the sources know how to talk to
customers knowledgeably?
b. How long has the processor been manufactured? Is the volume
demand likely to continue long enough for the manufacturers
to continue optimizing their manufacturing processes to reduce
costs and increase yields?
c. How large is the USEFUL software base? (There are a lot of
TRS-DOS- and CP/M-based software gems for Z80 that are
useless for either a disk-less environment or a multi-
tasking world).
d. Is there a large enough labor pool of knowledgeable
consultants/app.-eng. types to help with sneaky problems
that WILL arise from time to time?
e. What test equipment is available (and at what price) for
diagnostic work with this processor (ICE, ANALYZERS, etc).
B. WHAT SORT OF A TAXONOMY OF "TYPICAL" APPLICATIONS MIGHT BE CONSIDERED
IN DEVELOPING BENCHMARKS AND WEIGHTING FUNCTIONS FOR A PARTICULAR
"CLASS" OF MICROPROCESSORS?
1. High-production-volume (50K+/yr) dedicated machine controllers.
(stresses I/O, reliability, low asymptotic price;
de-emphasizes ease of software development, etc.)
2. Controller for a (non-user-programmable) smart terminal.
3. A packet-switching front-end or transmission node for a
telecommunications network, or a music synthesizer.
(speed and real-time interrupt response, extreme physical
ruggedness).
4. A single-user "personal" computer, whatever that is.
(ease of software development, product maturity, software
base, speed).
5. A uni-processor for a small-business floppy-based system with
2-4 terminals, one printer, and either 64K OR >>64K of
available memory. (i.e. a direct replacement in a typical
minicomputer standalone application).
6. As one in a network of 16 processors communicating via a common
memory, but each having significant private memory on its own
card (i.e. invading the province of the mainframe.)
C. WHEN NOT TO COMPARE TWO MICROS.
If the spectrum of applications for two micros shows insufficient
overlap, then comparison is probably meaningless. Otherwise, I can't
see one single reason for not comparing them, providing a set of
benchmarks is used that is within the intersection of their
applications spectra (fuzzy sets allowed). I see no reason to bar
comparison on any other grounds, but I am willing to hear any logical
arguments to the contrary.
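The "fuzzy sets allowed" aside can be taken literally: give each chip
a membership grade in each application area, intersect with min, and
benchmark only where the joint membership is reasonable. All the
grades and application names below are invented for illustration.

```python
# Sketch: deciding whether two micros are comparable by intersecting
# their application spectra as fuzzy sets (membership grades 0..1).
# All grades below are invented for illustration.

chip_x = {"controller": 0.9, "terminal": 0.7, "personal": 0.4}
chip_y = {"terminal": 0.8, "personal": 0.9, "multiuser": 0.6}

def intersection(a, b):
    """Fuzzy intersection: take the min of the membership grades."""
    return {app: min(a[app], b[app]) for app in a.keys() & b.keys()}

overlap = intersection(chip_x, chip_y)
# benchmark only in areas where BOTH chips have reasonable membership
comparable = {app for app, grade in overlap.items() if grade >= 0.5}
print(comparable)
```

With these numbers the two chips should be benchmarked against each
other only as terminal controllers; a "personal computer" comparison
would be unfair to chip_x, and a "multiuser" one meaningless for it.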
D. HOW TO APPLY WEIGHTING FUNCTIONS: AN INAUSPICIOUS BEGINNING
Weighting must be done on several levels in the evaluation process:
1. First, weighting must be used on the various test data that are
used to compute the magnitude of individual microprocessor
"features".
2. Second, within each single "application" the features must be
weighted to form an "overall" in-context rating.
3. Third, the in-context ratings of various applications must be
combined into an overall weighting.
I doubt that a suitable weighting method will be linear in practice.
I wonder whether weighted ratings can simply be combined in the usual
way, under either a one-dimensional (linear) or a Euclidean metric.
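The three weighting levels above, and the linear-vs-Euclidean question,
can be sketched as follows. Every feature score, weight, and
application mix below is invented for illustration; nothing here rates
a real chip.

```python
# Sketch: the three weighting levels described above, comparing a
# linear (weighted-average) combination with a Euclidean one.
# All feature scores and weights are invented for illustration.
import math

# level 1: raw feature scores for one chip, already scaled 0..10
features = {"code_size": 7.0, "speed": 5.0, "ease": 8.0}

# level 2: per-application feature weights (each row sums to 1)
app_weights = {
    "controller": {"code_size": 0.5, "speed": 0.4, "ease": 0.1},
    "personal":   {"code_size": 0.2, "speed": 0.3, "ease": 0.5},
}

# level 3: how much we care about each application (sums to 1)
mix = {"controller": 0.6, "personal": 0.4}

def rate(app):
    """In-context rating: weighted average of the feature scores."""
    w = app_weights[app]
    return sum(w[f] * features[f] for f in features)

linear = sum(mix[a] * rate(a) for a in mix)
euclid = math.sqrt(sum(mix[a] * rate(a) ** 2 for a in mix))
print(linear, euclid)
```

The Euclidean combination always comes out at least as large as the
linear one and rewards chips whose per-application ratings are uneven;
which behavior is "right" is exactly the open question posed above.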
------------------------------
-----------------------------------------------------------------
gopher://quux.org/ conversion by John Goerzen <[email protected]>
of http://communication.ucsd.edu/A-News/
This Usenet Oldnews Archive
article may be copied and distributed freely, provided:
1. There is no money collected for the text(s) of the articles.
2. The following notice remains appended to each copy:
The Usenet Oldnews Archive: Compilation Copyright (C) 1981, 1996
Bruce Jones, Henry Spencer, David Wiseman.