In the above, the slow elapsed times for reads are due to the process context
switching off the CPU while we wait for the next keystroke. For example,
the second line shows an on-CPU time of 45 us and an elapsed time of 357210 us.
In fact, the elapsed times are equal to the inter-keystroke delays.
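The elapsed vs. on-CPU distinction can be sketched outside of DTrace. This is an illustrative Python snippet (not part of the demo): a blocking call accrues wall-clock (elapsed) time while the process is switched off the CPU, but almost no CPU time.

```python
import time

elapsed_start = time.perf_counter()   # wall-clock timer
cpu_start = time.process_time()       # CPU time consumed by this process

time.sleep(0.1)   # blocks, like a read() waiting for the next keystroke

elapsed = time.perf_counter() - elapsed_start
cpu = time.process_time() - cpu_start

# elapsed will be ~100000 us; on-CPU time will be close to zero,
# since the process was off the CPU for almost the whole interval.
print(f"elapsed: {elapsed * 1e6:.0f} us, on-CPU: {cpu * 1e6:.0f} us")
```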
What about the writes? Their elapsed times are also longer than their on-CPU
times. Did we context switch off for them too? ... Let's run a different demo,
For every syscall, the elapsed time is around 10 us (microseconds) longer
than the on-CPU time. These aren't micro context switches; this is DTrace
slowing down the program! The more closely we measure something, the more
we affect it (the observer effect, often likened to Heisenberg's
uncertainty principle).
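The observer effect is easy to demonstrate directly. This illustrative Python sketch (an assumption for demonstration, not DTrace itself) times the same loop with and without a timestamp taken on every iteration; the act of measuring adds a small, fairly consistent cost each time, much as DTrace probes add overhead to every syscall:

```python
import time

N = 100_000

# Time a bare loop.
t0 = time.perf_counter()
for _ in range(N):
    pass
bare = time.perf_counter() - t0

# Time the same loop, but take a timestamp every iteration.
t0 = time.perf_counter()
for _ in range(N):
    time.perf_counter()   # the act of measuring
measured = time.perf_counter() - t0

# The instrumented loop is slower; the difference is pure measurement cost.
overhead_ns = (measured - bare) / N * 1e9
print(f"per-iteration measurement overhead: ~{overhead_ns:.0f} ns")
```

The absolute numbers depend on the machine, which is why the overhead differs between the x86 server and the Ultra 5 below.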
OK, so for the above output we can tell that each elapsed time is around 10 us
longer than it should be. That's fine, since it's fairly consistent and not
a huge difference. This is an x86 server with an 867 MHz CPU.
Now let's try the same on an Ultra 5 with a 360 MHz CPU,