Eric -

I'm really looking forward to your class.  I'm interested in what
other folks are doing and in finding out whether anything I'm doing
is useless.  Here are some of the things I do in an effort to improve
system
performance.  FYI, I'm running a 2000M with a Dimension-040 CPU,
AMOS/32 1.0D(165)-5 (hopefully upgrading to 2.2C this summer), 32MB of
memory, 2 APU's, 1 355, 3 SCSI-2 ~1GB drives, 1 145MB Micropolis,
ANDI, d/Basic, d/Vue, d/Spool, AlphaFax, PolyTrack, INMEMO, EZSPOOL,
& SuperVue.  The business I work for handles medical & dental
insurance for self-insured companies.

Trick 1 - specialize the use of those disks.  In general, I use one
disk for key files and another for data files.  My programs, in
general, live on the system disk.  Most printouts live on the old
145MB disk.  In an ideal situation, the program is reading a key file
on one disk, reading the data file on another disk, and outputting a
report on a third disk.  The whole idea is to minimize head movement.
Of course, with multiple users accessing the disks at the same time,
the best-laid plans fall apart, but I'm still betting on better
overall performance because of where the various types of files are
stored.

Trick 2 - minimize the search path.  Most of my RUN's live in BAS:.
In my menus, I explicitly give the program location (BAS:) when
running a program.

Trick 3 - use small portions of the disks.  Although I've got 3+GB of
disk space, I only use about 650MB.  Each disk has 32 logicals.  For
now, I blow off two-thirds of those, actually using only 8 - 10
logicals on each.  The idea here is that if access is fast on a 1GB
drive, it ought to be faster if restricted to a small portion of the
disk.

Trick 4 - use separate bitmaps for each logical.

Trick 5 - pump up the disk cache.  Over 14MB of my memory is used by
SuperDisk.  I keep statistics of reads, writes, and cache hits
(captured at time of backup) and try to allocate the cache memory
between the disks based on activity (plus a lot of guessing).
Currently, I'm averaging over 98% cache hits on my system disk, over
91% on the primarily key file disk, over 88% on the data file disk,
and over 98% on the print file disk.  I'll probably acquire another
16MB of memory this year and give most of that to SuperDisk.
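For what it's worth, the split is roughly proportional.  Here's a
small Python sketch of the arithmetic (the function is made up for
illustration; SuperDisk itself doesn't work this way, and my real
allocation involves a lot of guessing on top of this):

```python
# Sketch: split the SuperDisk cache pool between the four drives in
# proportion to each drive's head-moving activity (misses + writes).
# The helper name and structure are illustrative only.

def allocate_cache(total_mb, activity):
    """Split total_mb of cache across drives in proportion to activity."""
    total = sum(activity.values())
    return {drive: round(total_mb * ops / total, 1)
            for drive, ops in activity.items()}

# Nightly misses + writes per drive, taken from the statistics below.
activity = {"DSK": 26073 + 69398, "PSI": 30732 + 80058,
            "ILA": 28369 + 5321, "OLD": 3200 + 30622}

shares = allocate_cache(14.0, activity)  # 14MB currently in the cache
```

By that arithmetic, the PSI (key file) drive earns the biggest share,
which matches where I've been putting the memory.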

Trick 6 - optimize with SuperDisk.  You can adjust the read-aheads
for each logical from 0 to 9 blocks.  I use the statistics captured
nightly to optimize this.  The main use of the statistics is to see
where the actual disk activity is.  I try to cluster the files such
that the primary disk activity is on one particular logical, the two
logicals beside it have less, those next out have even less, etc.  I
concentrate on misses and writes, NOT on hits, since misses and
writes are the only times there is head movement.  I've done a better
job of this in the past, but current statistics follow:

(My users are typically sitting on DSK4: when they run programs.)
                        DSK Statistics, Misses & Writes
    Percentage of Total  ---- Average ----   --Percentage of Hits--
Disk   Misses    Writes     Misses   Writes    Cache  Rd Ahd   Total
 0   13.799%    6.552%      3,598    4,547   58.60%  40.59%  99.19%
 1    0.038%    0.000%         10        0   48.05%  19.23%  67.28%
 2    1.642%    0.218%        428      151   31.39%  54.89%  86.28%
 3   30.006%    1.309%      7,824      909   29.82%  48.92%  78.74%
 4   37.543%   79.500%      9,789   55,171   33.88%  65.13%  99.00%
 5   14.544%    8.276%      3,792    5,743   62.70%  25.60%  88.30%
 6    0.644%    0.260%        168      181   26.71%  69.67%  96.37%
 7    0.052%    0.012%         14        8   21.91%  70.70%  92.60%
 8    1.248%    3.221%        326    2,235   16.81%  78.51%  95.32%
 9    0.483%    0.652%        126      453   25.83%  69.15%  94.97%

    100.000%  100.000%     26,073   69,398   41.52%  56.75%  98.27%

(Primarily key files are on this disk.  Accounting is also here.)
                        PSI Statistics, Misses & Writes
    Percentage of Total  ---- Average ----   --Percentage of Hits--
Disk   Misses    Writes     Misses   Writes    Cache  Rd Ahd   Total
11    1.263%    8.885%        388    7,113   29.98%  56.93%  86.90%
12    3.205%   12.004%        985    9,610   46.31%  48.79%  95.10%
13   24.651%   10.044%      7,576    8,041   71.72%   5.98%  77.69%
14   40.994%   38.319%     12,598   30,678   73.39%  12.04%  85.42%
15    8.918%    1.545%      2,741    1,237   75.30%   9.49%  84.79%
16    6.545%    9.961%      2,012    7,974   64.76%  30.47%  95.23%
17    9.465%    7.167%      2,909    5,738   80.31%  12.95%  93.26%
18    4.945%   12.075%      1,520    9,667   35.07%  63.63%  98.69%
19    0.005%    0.000%          2        0   26.25%  44.40%  70.66%
20    0.009%    0.000%          3        0   33.87%  30.91%  64.78%

    100.000%  100.000%     30,732   80,058   59.01%  32.52%  91.54%

(Data files + program source files are on this disk.)
                        ILA Statistics, Misses & Writes
    Percentage of Total  ---- Average ----   --Percentage of Hits--
Disk   Misses    Writes     Misses   Writes    Cache  Rd Ahd   Total
21    1.361%    9.710%        386      517   78.65%  14.90%  93.54%
22    3.246%    0.338%        921       18   38.53%  49.19%  87.73%
23   16.784%    0.360%      4,761       19   21.60%  64.34%  85.94%
24   69.661%   52.871%     19,762    2,813   21.32%  66.99%  88.31%
25    8.361%   14.212%      2,372      756   24.98%  66.34%  91.32%
26    0.473%   15.992%        134      851    8.33%  86.54%  94.87%
27    0.023%    2.966%          7      158   32.30%  66.28%  98.58%
28    0.054%    3.097%         15      165    5.42%  86.35%  91.77%
29    0.009%    0.339%          2       18   46.30%  46.90%  93.20%
30    0.022%    0.115%          6        6   10.81%  77.89%  88.71%
31    0.006%    0.001%          2        0   28.01%  43.62%  71.63%

    100.000%  100.000%     28,369    5,321   23.55%  64.97%  88.52%

(Print files are here.)
                        OLD Statistics, Misses & Writes
    Percentage of Total  ---- Average ----   --Percentage of Hits--
Disk  Misses    Writes     Misses   Writes    Cache  Rd Ahd   Total
 0    0.126%    0.000%          4        0   17.98%  59.98%  77.96%
 1    0.292%    0.464%          9      142   24.69%  73.15%  97.84%
 2    0.126%    0.000%          4        0   24.67%  44.28%  68.95%
 3    0.132%    0.000%          4        0   31.90%  31.16%  63.06%
 4   99.324%   99.536%      3,179   30,480   42.69%  56.05%  98.74%

    100.000%  100.000%      3,200   30,622   42.66%  56.08%  98.73%
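To show the kind of ranking I do by hand, here's a Python sketch
(illustration only; my real code is d/Basic) that orders the DSK
logicals by misses plus writes, the head-moving operations, using the
averages from the first table above:

```python
# Rank logicals by head-moving activity (misses + writes) so files can
# be clustered around the busiest logical, with quieter logicals placed
# further out.  Figures are the DSK drive's nightly averages above.

dsk = {  # logical: (avg misses, avg writes)
    0: (3598, 4547), 1: (10, 0), 2: (428, 151), 3: (7824, 909),
    4: (9789, 55171), 5: (3792, 5743), 6: (168, 181), 7: (14, 8),
    8: (326, 2235), 9: (126, 453),
}

def busiest_first(stats):
    """Logicals sorted by misses + writes, busiest first."""
    return sorted(stats, key=lambda d: sum(stats[d]), reverse=True)

order = busiest_first(dsk)
# DSK4: comes out busiest, which matches where my users normally sit.
```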

Trick 7 - use the task manager for background reports.  This doesn't
increase system throughput (as far as I know), but it lets the user
ask for a set of reports which take from 1 minute to 10 minutes each
to run.  The user isn't tied up waiting for one report to finish
before asking for the next.  The users are able to do other things
while the task manager is handling the reports.

Trick 8 - have more than one task manager queue.  Employees in the
marketing department have a set of reports to run (or have the task
manager run) two months before a group's renewal date.  They dislike
having to wait for the reports to run when other folks have tied up
the task manager.  I gave them their own queue, so they only have to
contend with each other's requests.  Again, this doesn't speed up
system performance, but those three users do get their reports
quicker than they otherwise would.

Trick 9 - do off-hour reporting, etc.  There are many things we do
daily, weekly, monthly, and quarterly.  Many of these are reports.
Some take over an hour to run before printing.  We set the task
manager up to do these at night or on the weekends.  Not only does no
one have to remember to run the programs, but they also run when few
people are around, thus improving system performance during the day
(when they would otherwise have run).  We currently have 33 permanent
tasks set up.

Trick 10 - numerous printers, dedicated if possible.  This doesn't
increase system performance, but it does let the users get their
reports quicker, so they at least have the illusion of better system
performance.

Trick 11 - lay out data & key files in update order.  This doesn't
matter as much now that I've got SCSI-2 drives (and use
write-behind), but I still do it.  A lot of what we do here is pay
claims.  Numerous files are updated at the time a claim is finished.
Also, at some point during the day checks are printed.  At least
daily, finished claims which have had checks printed are moved from a
current claims file to a history file.  Six data files and 15 key
files have records added and deleted at that time.  I've laid those
files out on the disk in the order in which they are accessed with
the idea that the heads aren't jumping all over the place for every
access.

Trick 12 - split history files.  This trick is probably rather
specific to us.  We have a lot of history of paid claims.  The data
files alone are over 110MB.  Most reports we run for clients are of
claims paid during a particular plan year (which 11 times out of 12
is not a calendar year).  Rather than having to read as much as 8+
years of client history or changing numerous key files to make them
date sensitive, we keep history in six month chunks.  This means no
more than 3 data files will need to be accessed to read a year's
worth of history for a group.  We feel this has helped system
performance a lot.  (It also avoids the nightmare of stopping work for
a few hours should we need to rebuild a history key file.)
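The chunk arithmetic is simple enough to sketch.  Assuming half-year
chunks keyed by (year, half), a 12-month plan year can touch at most
three of them.  The Python below is purely illustrative; our actual
code is d/Basic and our file naming differs:

```python
# Sketch: which half-year history chunks cover a 12-month plan year?
# Chunks are identified as (year, half); half 1 = Jan-Jun, 2 = Jul-Dec.

def chunks_for_plan_year(start_year, start_month):
    """Half-year chunks covering the 12 months starting at that month."""
    chunks = []
    for offset in range(12):
        m = start_month - 1 + offset
        year, month = start_year + m // 12, m % 12 + 1
        half = 1 if month <= 6 else 2
        if (year, half) not in chunks:
            chunks.append((year, half))
    return chunks

# A plan year starting in April spans three chunks; a calendar plan
# year (the 1-in-12 case) needs only two.
```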

As I said, I'm looking forward to your seminar.  I would generally
print out something like this and proof it before sending, but time
is short.  I hope my train of thought didn't get lost too often. I'll
see you in CA.


Mike Williamson
IMA of Louisiana