I'm really looking forward to your class. I'm interested in what
other folks are doing and anything I'm doing that is useless. Here
are some of the things I do in an effort to improve system
performance. FYI, I'm running a 2000M with a Dimension-040 CPU,
AMOS/32 1.0D(165)-5 (hopefully upgrading to 2.2C this summer), 32MB of
memory, 2 APUs, 1 355, 3 SCSI-2 ~1GB drives, 1 145MB Micropolis,
ANDI, d/Basic, d/Vue, d/Spool, AlphaFax, PolyTrack, INMEMO, EZSPOOL,
& SuperVue. The business I work for handles medical & dental
insurance for self-insured companies.
Trick 1 - specialize the use of those disks. In general, I use one
disk for key files and another for data files. My programs, in
general, live on the system disk. Most printouts live on the old
145MB disk. In an ideal situation, the program is reading a key file
on one disk, reading the data file on another disk, and outputting a
report on a third disk. The whole idea is to minimize head movement.
Of course, with multiple users accessing the disks at the same time,
the best-laid plans fall apart, but I'm still betting on better
performance overall because of where the various types of files are
stored.
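If it helps to picture it, here's a rough Python sketch (nothing that
runs on the Alpha Micro, and the block numbers are invented) of why
interleaving key and data accesses on one spindle costs so much more
head travel than giving each file type its own drive:

    # A rough sketch (Python, not AMOS) of why splitting key and data
    # files across spindles cuts head movement.  The block numbers are
    # invented; the point is that interleaving two widely separated
    # files on one drive forces a long seek on almost every access.

    import itertools

    KEY_BLOCKS  = [1000, 1002, 1004, 1006]       # hypothetical key file reads
    DATA_BLOCKS = [90000, 90010, 90020, 90030]   # hypothetical data file reads

    def head_travel(accesses, start=0):
        """Total head movement (in blocks) for a sequence of accesses."""
        travel, pos = 0, start
        for block in accesses:
            travel += abs(block - pos)
            pos = block
        return travel

    # One spindle: key and data reads alternate, so the head shuttles
    # back and forth across the drive.
    one_disk = list(itertools.chain.from_iterable(zip(KEY_BLOCKS, DATA_BLOCKS)))
    print("one spindle :", head_travel(one_disk))

    # Two spindles: each drive only sees its own file's accesses.
    print("two spindles:", head_travel(KEY_BLOCKS) + head_travel(DATA_BLOCKS))
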
Trick 2 - minimize the search path. Most of my RUNs live in BAS:.
In my menus, I explicitly give the program location (BAS:) when
running a program.
Trick 3 - use small portions of the disks. Although I've got 3+GB of
disk space, I only use about 650MB. Each disk has 32 logicals. For
now I blow off two-thirds of those, actually using only 8-10
logicals on each. The idea here is that if access is fast on a 1GB
drive, it ought to be faster still when restricted to a small portion
of the disk.
Trick 4 - use a separate bitmap for each logical.
Trick 5 - pump up the disk cache. Over 14MB of my memory is used by
SuperDisk. I keep statistics of reads, writes, and cache hits
(captured at backup time) and try to allocate the cache memory
among the disks based on activity (plus a lot of guessing).
Currently, I'm averaging over 98% cache hits on my system disk, over
91% on the primarily-key-file disk, over 88% on the data file disk,
and over 98% on the print file disk. I'll probably acquire another
16MB of memory this year and give most of it to SuperDisk.
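Here's roughly how I think about splitting the cache, written as a
little Python sketch rather than anything on the AMOS box. The
activity counts are placeholders; the real numbers come out of the
nightly statistics, and there's still a lot of guessing on top of the
arithmetic:

    # Sketch of dividing the SuperDisk cache in proportion to activity.
    # The per-drive operation counts are placeholders, not my real numbers.

    ACTIVITY = {            # reads + writes captured at backup time (made up)
        "system": 120_000,
        "keys":   260_000,
        "data":   340_000,
        "print":   60_000,
    }
    CACHE_KB = 14 * 1024    # roughly the 14MB currently given to SuperDisk

    total_ops = sum(ACTIVITY.values())
    for drive, ops in ACTIVITY.items():
        share_kb = CACHE_KB * ops // total_ops
        print(f"{drive:>6}: {ops:>7} ops -> about {share_kb} KB of cache")
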
Trick 6 - optimize with SuperDisk. You can adjust the read-aheads
for each logical from 0 to 9 blocks. I use the statistics captured
nightly to optimize this. Mostly, though, I use the statistics to see
where the actual disk activity is. I try to cluster the files such
that the primary disk activity is on one particular logical, the two
logicals beside it have less, those next out have even less, and so
on (there's a sketch of the idea after the statistics below). I
concentrate on misses and writes, NOT on hits, since misses and
writes are the only operations that actually move the heads. I've
done a better job of this in the past, but my current statistics
follow:
(My users are typically sitting on DSK4: when they run programs.)
DSK Statistics, Misses & Writes
     Percentage of Total  ---- Average ----  --- Percentage of Hits ---
Disk    Misses    Writes    Misses   Writes     Cache   Rd Ahd    Total
   0   13.799%    6.552%     3,598    4,547    58.60%   40.59%   99.19%
   1    0.038%    0.000%        10        0    48.05%   19.23%   67.28%
   2    1.642%    0.218%       428      151    31.39%   54.89%   86.28%
   3   30.006%    1.309%     7,824      909    29.82%   48.92%   78.74%
   4   37.543%   79.500%     9,789   55,171    33.88%   65.13%   99.00%
   5   14.544%    8.276%     3,792    5,743    62.70%   25.60%   88.30%
   6    0.644%    0.260%       168      181    26.71%   69.67%   96.37%
   7    0.052%    0.012%        14        8    21.91%   70.70%   92.60%
   8    1.248%    3.221%       326    2,235    16.81%   78.51%   95.32%
   9    0.483%    0.652%       126      453    25.83%   69.15%   94.97%
(Print files are here.)
OLD Statistics, Misses & Writes
     Percentage of Total  ---- Average ----  --- Percentage of Hits ---
Disk    Misses    Writes    Misses   Writes     Cache   Rd Ahd    Total
   0    0.126%    0.000%         4        0    17.98%   59.98%   77.96%
   1    0.292%    0.464%         9      142    24.69%   73.15%   97.84%
   2    0.126%    0.000%         4        0    24.67%   44.28%   68.95%
   3    0.132%    0.000%         4        0    31.90%   31.16%   63.06%
   4   99.324%   99.536%     3,179   30,480    42.69%   56.05%   98.74%
 All  100.000%  100.000%     3,200   30,622    42.66%   56.08%   98.73%
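Here's a small Python sketch of the clustering idea from above. The
misses and writes come straight from the DSK: table; the organ-pipe
placement (busiest logical in the middle, activity falling off to
either side) is just my own way of arranging things, not a SuperDisk
feature:

    # Sketch of the clustering idea: rank logicals by misses + writes
    # (the operations that move the heads) and lay them out organ-pipe
    # style, busiest in the middle, activity falling off to either side.
    # Counts are from the DSK: table above; the placement logic is just
    # my own interpretation, not anything SuperDisk does for you.

    MISSES_WRITES = {   # logical: (misses, writes)
        0: (3598, 4547),  1: (10, 0),       2: (428, 151),  3: (7824, 909),
        4: (9789, 55171), 5: (3792, 5743),  6: (168, 181),  7: (14, 8),
        8: (326, 2235),   9: (126, 453),
    }

    ranked = sorted(MISSES_WRITES, key=lambda d: sum(MISSES_WRITES[d]),
                    reverse=True)

    layout = []
    for i, logical in enumerate(ranked):
        if i % 2 == 0:
            layout.append(logical)      # fill out to the right of center
        else:
            layout.insert(0, logical)   # fill out to the left of center

    print("busiest first  :", ranked)
    print("suggested order:", layout)
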
Trick 7 - use the task manager for background reports. This doesn't
increase system throughput (as far as I know), but it lets the user
ask for a set of reports that each take from 1 to 10 minutes to run.
The user isn't tied up waiting for one report to finish
before asking for the next. The users are able to do other things
while the task manager is handling the reports.
Trick 8 - have more than one task manager queue. Employees in the
marketing department have a set of reports to run (or have the task
manager run) two months before a group's renewal date. They dislike
having to wait for the reports to run when other folks have tied up
the task manager. I gave them their own queue, so they only have to
contend with each other's requests. Again, this doesn't speed up
system performance, but those three users do get their reports
sooner than they otherwise would.
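As a toy illustration (Python threads, not the AMOS task manager),
giving each queue its own worker means a marketing report only ever
waits behind other marketing reports:

    # Toy model of separate queues: each queue has its own worker, so a
    # marketing job never sits behind everything else in the shop.  The
    # report names and run times are invented.

    import queue
    import threading
    import time

    def worker(name, jobs):
        while True:
            job = jobs.get()
            time.sleep(job["minutes"] * 0.01)    # stand-in for the run time
            print(name, "finished", job["report"])
            jobs.task_done()

    general   = queue.Queue()
    marketing = queue.Queue()
    for q, name in ((general, "general"), (marketing, "marketing")):
        threading.Thread(target=worker, args=(name, q), daemon=True).start()

    general.put({"report": "claims aging",     "minutes": 10})
    general.put({"report": "check register",   "minutes": 5})
    marketing.put({"report": "renewal package", "minutes": 3})   # no waiting

    general.join()
    marketing.join()
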
Trick 9 - do off-hour reporting, etc. There are many things we do
daily, weekly, monthly, and quarterly. Many of these are reports.
Some take over an hour to run before printing. We set the task
manager up to do these at night or on the weekends. Not only does no
one have to remember to run the programs, but they run when few
people are around, thus improving system performance during the day
(when they would otherwise have run). We currently have 33 permanent
tasks set up.
Trick 10 - numerous printers, dedicated if possible. This doesn't
increase system performance, but it does let the users get their
reports quicker, so they at least have the illusion of better system
performance.
Trick 11 - lay out data & key files in update order. This doesn't
matter as much now that I've got SCSI-2 drives (and use
write-behind), but I still do it. A lot of what we do here is pay
claims. Numerous files are updated at the time a claim is finished.
Also, at some point during the day checks are printed. At least
daily, finished claims which have had checks printed are moved from a
current claims file to a history file. Six data files and 15 key
files have records added and deleted at that time. I've laid those
files out on the disk in the order in which they are accessed, with
the idea that the heads aren't jumping all over the place for every
access.
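To show what I mean, here's a Python sketch comparing head travel for
an alphabetical layout versus an update-order layout. The file names,
sizes, and access sequence are invented stand-ins for the end-of-day
claim move:

    # Sketch comparing head travel for two file layouts.  File names,
    # sizes, and the access sequence are invented stand-ins for the
    # end-of-day claim move.

    ACCESS_ORDER = ["CLAIMS", "CLMIDX", "CHECKS", "CHKIDX", "HIST", "HSTIDX"]
    SIZE_BLOCKS  = {"CLAIMS": 4000, "CLMIDX": 800, "CHECKS": 1500,
                    "CHKIDX": 300, "HIST": 9000, "HSTIDX": 1200}

    def travel(placement_order):
        """Head travel for one pass through ACCESS_ORDER, given a layout."""
        start, pos = {}, 0
        for name in placement_order:        # assign contiguous block ranges
            start[name] = pos
            pos += SIZE_BLOCKS[name]
        head, moved = 0, 0
        for name in ACCESS_ORDER:           # touch one record in each file
            moved += abs(start[name] - head)
            head = start[name]
        return moved

    print("alphabetical layout:", travel(sorted(ACCESS_ORDER)))
    print("update-order layout:", travel(ACCESS_ORDER))
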
Trick 12 - split history files. This trick is probably rather
specific to us. We have a lot of history of paid claims. The data
files alone are over 110MB. Most reports we run for clients are of
claims paid during a particular plan year (which 11 times out of 12
is not a calendar year). Rather than having to read as much as 8+
years of client history, or change numerous key files to make them
date-sensitive, we keep history in six-month chunks. This means no
more than three data files need to be accessed to read a year's
worth of history for a group. We feel this has helped system
performance a lot. (It also avoids the nightmare of stopping work for
a few hours should we need to rebuild a history key file.)
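The arithmetic behind the "no more than three files" claim, as a
Python sketch. The file naming scheme is invented; the point is just
that any 12-month plan year overlaps at most three half-year chunks:

    # Sketch of the six-month-chunk arithmetic.  The file naming scheme
    # is invented; the point is that a 12-month plan year overlaps at
    # most three half-year history files.

    from datetime import date

    def chunks_for_plan_year(start):
        """Half-year history files touched by the 12 months from `start`."""
        end = date(start.year + 1, start.month, start.day)  # exclusive
        needed = []
        year, half = start.year, 1 if start.month <= 6 else 2
        while date(year, 1 if half == 1 else 7, 1) < end:
            needed.append(f"HIST-{year}-H{half}")
            year, half = (year, 2) if half == 1 else (year + 1, 1)
        return needed

    # A plan year starting April 1 touches three chunks; a calendar-year
    # plan touches only two.
    print(chunks_for_plan_year(date(1994, 4, 1)))
    print(chunks_for_plan_year(date(1994, 1, 1)))
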
As I said, I'm looking forward to your seminar. I would generally
print out something like this and proof it before sending, but time
is short. I hope my train of thought didn't get lost too often. I'll
see you in CA.