Speed, Throughput and Utilization

When we deal with computations involving time, speed and utilization, we use units such as seconds (for time), Hertz (for frequency) and bits per second (for transfer rates).

Modern computers are very fast; this means that we will be concerned with very small times. For this reason, you will need to be familiar with the numerical abbreviations:

  milli (m) = 10^-3, micro (μ) = 10^-6, nano (n) = 10^-9, pico (p) = 10^-12

In addition, when dealing with time and transfer rates, the prefixes kilo, mega, giga, etc. all refer to powers of ten:

  kilo (K) = 10^3, mega (M) = 10^6, giga (G) = 10^9

This is in contrast to computations of space requirements, in which those prefixes refer to powers of two. This is a bit confusing in the beginning, but the correct usage for a given computation should be clear from the context of the problem.
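The distinction can be made concrete with a quick computation (a sketch; the variable names are ours):

```python
# Space prefixes are powers of two:
KB = 2 ** 10          # 1 kilobyte  = 1,024 bytes
MB = 2 ** 20          # 1 megabyte  = 1,048,576 bytes

# Time and transfer-rate prefixes are powers of ten:
kHz = 10 ** 3         # 1 kilohertz = 1,000 cycles / second
MHz = 10 ** 6         # 1 megahertz = 1,000,000 cycles / second

# "mega" differs by almost 5% between the two conventions:
print(MB - MHz)       # 48576
```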

Computer speeds and transfer rates are typically measured in terms of frequency (cycles per second or Hertz):

a 3 GHz CPU chip has a clock which ticks 3 billion times each second, and
a 100 MHz Ethernet segment can transfer 100 million bits per second.
In these contexts, a Hertz is one clock tick per second and one bit transferred per second, respectively.
Note that the purpose of these "clocks" is not to keep time (which they do rather poorly, in the overall scheme of things), but to measure "steps" (individual machine language instructions) as they are executed.
Since the clock time (or cycle time) is measured in seconds per clock tick or seconds per cycle, it is just the reciprocal of the frequency. So
a 733 MHz CPU chip has a cycle time of 1 / 733,000,000 seconds = 1.36 * 10^-9 seconds = 1.36 ns.
the 100 MHz Ethernet segment transfers one bit in 1 / 100,000,000 seconds = 1 * 10^-8 seconds = 0.01 μs = 10 ns.
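The reciprocal relationship translates directly into code (a sketch of the same arithmetic):

```python
def cycle_time(frequency_hz):
    """Cycle time in seconds is the reciprocal of the frequency in Hertz."""
    return 1.0 / frequency_hz

# a 733 MHz CPU chip: about 1.36 ns per clock tick
print(cycle_time(733e6))    # ~1.36e-09 seconds

# a 100 MHz Ethernet segment: 10 ns per bit
print(cycle_time(100e6))    # 1e-08 seconds
```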
A bus in a computer is simply a group of wires over which data or commands are transferred (e.g., between the CPU and the IDE controller). There are many types of buses found in PCs:
Bus                                    Frequency (MHz)   Clock Time* (μs)   Bus Width (bits)
PCI-1 (Peripheral Control Interface)         33               0.033               16
USB-1 (Universal Serial Bus)                 12               0.083                1
FireWire (IEEE-1394)                        400               0.0025               1
Ultra Wide SCSI                             320               0.0031              16
Ultra2 Wide SCSI                            640               0.0016              16

* some clock times are approximate
The throughput for each bus (in bits per second) is the product of the frequency and the width. Buses with a width of 1 bit are serial buses, while buses with widths greater than 1 are parallel buses. In a parallel bus, the entire width of the bus is transferred on every clock tick.
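The throughput rule can be sketched as follows, using the serial entries from the table above:

```python
def throughput_bits_per_second(frequency_mhz, width_bits):
    """Throughput = frequency * bus width, with MHz converted to Hz."""
    return frequency_mhz * 1_000_000 * width_bits

# USB-1: serial bus, 12 MHz * 1 bit
print(throughput_bits_per_second(12, 1))     # 12000000 bits / second

# FireWire: serial bus, 400 MHz * 1 bit
print(throughput_bits_per_second(400, 1))    # 400000000 bits / second
```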

Note that IDE speeds are nominally the same as PCI, but as with any bus, any particular device may function at a lower speed: the device provides the clocking. This is done to help avoid data overruns (when data arrives too fast for the device) or underruns (when data is not available quickly enough from the device). Note that these conditions can also occur on the computer side (if the hardware or software is too slow for the device).

It's interesting to compare those speeds with wireless speeds (all serial):
Standard                      Frequency (MHz)
Near Field Communication      0.106, 0.212, 0.424

There are many factors which determine the speed of a computer, including the CPU clock speed, the bus speeds and widths, and the speeds of memory and peripheral devices.

The problems of performance measurement and enhancement can be quite complex; upgrades involving one factor may not bring noticeable improvement because of the presence of a bottleneck caused by one or more other factors.

CPU speeds are often measured in MIPS (millions of instructions per second) or in FLOPS (floating point operations per second). Both measures depend on the CPU architecture and on the kind of program being run by the CPU, so there is no simple way to compare the performance of different CPU chips.

As a unit of measure, instructions per second refers to individual machine language instructions. For instance, in order to add two numbers from RAM and store the sum into RAM, a CPU may require as many as four instructions:

  1. load the first number into a register in the CPU chip;
  2. load the second number into another register in the CPU chip;
  3. add the two numbers; and
  4. store the sum into RAM.

The speed of the add instruction will depend only on the CPU clock speed, but the other instructions will also depend on the speed of the bus connecting the CPU to RAM (ignoring cache). For pipelined CPUs, these four instructions may execute in fewer than 4 clock ticks because several of the instructions may be in various stages of interpretation or execution simultaneously in the pipeline.

Floating point operations are often very complex, and their speed depends on the operation involved. Floating point division can take 10 times longer than floating point addition, which itself can take twice as long as an integer addition.
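As a rough sketch of why instruction mix matters, we can estimate the time for the four-instruction sequence above on a non-pipelined CPU. The per-instruction cycle counts here are hypothetical, chosen only for illustration, since real counts depend on the CPU and bus:

```python
CLOCK_HZ = 733e6                # the 733 MHz CPU from the earlier example
CYCLE_TIME = 1.0 / CLOCK_HZ     # seconds per clock tick

# hypothetical cycle counts: loads and stores touch RAM over the bus,
# while the add runs entirely inside the CPU
cycles = {"load first": 3, "load second": 3, "add": 1, "store sum": 3}

total_cycles = sum(cycles.values())      # 10 cycles
print(total_cycles * CYCLE_TIME)         # total time in seconds
```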

Most of the computations involved in computing capacity are products or quotients of numbers with well-defined units. We have seen that we can use these units to construct formulae for various computations. For instance, suppose we wish to compute the lifetime of a packet on a network segment (the length of time it takes to send the packet). We proceed as follows:

  1. The results clearly should have units of seconds:
    ? seconds = 1 packet
  2. The computation requires the size of the packet in bytes:
    assume that there are 150 bytes / packet
    as well as the frequency in bits per second:
    let the frequency = 10 MHz = 10,000,000 bits / second
  3. There is a conversion factor of 8 bits / byte.
  4. Examining the units, we see that if we multiply the size of the packet in bytes times the conversion factor of 8 bits per byte and divide by 10,000,000 bits per second, we are left with units of seconds:
    ? seconds = 1 packet * ( 150 bytes / packet ) * ( 8 bits / byte ) / ( 10,000,000 bits / second )
    = 1.2 * 10^-4 seconds
    = 0.12 ms
    = 120 μs
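The unit analysis above translates directly into code (a sketch of the same arithmetic):

```python
def packet_lifetime_seconds(packet_bytes, frequency_hz):
    """Lifetime = (bytes/packet * 8 bits/byte) / (bits/second) -> seconds."""
    return packet_bytes * 8 / frequency_hz

# a 150-byte packet on a 10 MHz segment: 1.2e-4 seconds = 120 microseconds
print(packet_lifetime_seconds(150, 10_000_000))    # 0.00012
```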
Sometimes we need to perform the same calculation with a number of values for a given parameter (e.g., the packet size). For instance, we might need to know the lifetimes of packets of 150 bytes, 350 bytes and 650 bytes. We can use a simple proportional calculation to obtain the remaining answers once we have computed the first:

120 μs * ( 350 bytes / 150 bytes ) = 280 μs
120 μs * ( 650 bytes / 150 bytes ) = 520 μs

Or perhaps we are interested in the lifetime if we speed up the network segment to 100 MHz:

120 μs / ( 100 MHz / 10 MHz ) = 12 μs
Here we have divided by the ratio of the speeds because the speed was in the denominator of the original calculation: the lifetime is inversely proportional to the speed.
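Both proportionalities can be sketched together, using the base values from the worked example above:

```python
base_size_bytes, base_lifetime_us = 150, 120.0

# lifetime is directly proportional to packet size
for size in (150, 350, 650):
    print(size, base_lifetime_us * size / base_size_bytes)   # 120.0, 280.0, 520.0

# ...and inversely proportional to the segment speed
print(base_lifetime_us / (100 / 10))    # 100 MHz segment: 12.0 microseconds
```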

We define utilization as

the ratio of the amount of a resource actually used to the maximum amount that could possibly be used.
Utilization therefore has no units, and should be between zero and one, or between 0 and 100% (since it is not possible to use less than nothing or more than is possible). If we suppose that five hundred of our 150-byte packets are sent each second on a network segment, we can compute the utilization as follows:

  1. ? = actual throughput / maximum possible throughput
  2. Actual throughput is usually measured in bits per second for network segments (the same units as Hz):
    500 packets / second * 150 bytes / packet * 8 bits / byte = 600,000 bits / second
  3. The maximum possible throughput is 10 MHz, so the utilization is
    600,000 bits / second / ( 10,000,000 bits / second ) = 0.06 = 6 %

Note here that "per cent" ("%") literally means " / 100 ", and is not a unit or dimension at all.

We can use proportionalities to compute further utilizations as before: doubling the packet rate to 1000 packets per second, for instance, doubles the utilization to 12%.
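The utilization steps can be sketched as follows; the further packet rates are illustrative values, not from the text:

```python
MAX_BPS = 10_000_000    # maximum possible throughput of the 10 MHz segment

def utilization(packets_per_second, packet_bytes=150):
    """Actual throughput over maximum possible throughput (dimensionless)."""
    actual_bps = packets_per_second * packet_bytes * 8
    return actual_bps / MAX_BPS

print(utilization(500))    # 0.06, i.e. 6%

# utilization is directly proportional to the packet rate
for rate in (1000, 2000):
    print(rate, utilization(rate))    # 0.12, then 0.24
```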

We continue exploring these techniques with a discussion of memory and storage requirements for video and audio applications.


©2017, Kenneth R. Koehler. All Rights Reserved. This document may be freely reproduced provided that this copyright notice is included.

Please send comments or suggestions to the author.