Queuing systems are everywhere, certainly in IT, and understanding them is essential in performance analysis.

A queue is a waiting line for requests that will be serviced, but not immediately. The queue is a fundamental building block in IT: a CPU handing out time slices, a disk servicing IO requests, a network device handling transmit requests, web servers, and so on. Despite queuing systems being apparently trivial, it is easy to get lost quickly when explaining the performance of systems that stack this apparently trivial technique in layers with different characteristics.

Queuing systems are defined by quite a few characteristics, such as the number of servers, the distribution of the requests, the number of queues and so on. Many queuing systems can however be approximated by much simpler queues, for example the M/M/c queue. M/M/c queues have Poisson arrivals and exponentially distributed service times, which roughly means that both the interarrival times and the service times deviate randomly from an average. The *utilization* \(\rho\) for M/M/c queues is

\[

\rho = \frac{\lambda}{c \mu}

\]

where \(\lambda\) is the request arrival rate, \(\mu\) is the service rate of a single server and \(c\) is the number of servers. So the maximum arrival rate (at full utilization, \(\rho=1\)) is \(\lambda = c\mu\). For example, a disk with an average service time of 2 ms has an average service rate of \(\frac{1}{0.002}=500\) requests per second, which is also its maximum arrival rate. The service time \(s\) is the inverse of the service rate, \(s = \frac{1}{\mu}\), so

\[

\rho = \frac{\lambda s }{c}

\]

is a convenient form as many tools give service times instead of service rates.
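Both forms of the utilization formula can be sketched in a few lines of Python; the numbers below are illustrative (a single disk with a 2 ms service time receiving 200 requests per second), not taken from the iostat example later on:

```python
# Utilization of an M/M/c queue, computed two equivalent ways.

def utilization_from_rate(lam, mu, c):
    """rho = lambda / (c * mu)"""
    return lam / (c * mu)

def utilization_from_service_time(lam, s, c):
    """rho = lambda * s / c, where s = 1 / mu"""
    return lam * s / c

lam = 200.0   # arrival rate, requests/s (illustrative)
s = 0.002     # service time, seconds (2 ms)
mu = 1 / s    # service rate: 500 requests/s

print(utilization_from_rate(lam, mu, 1))         # 0.4
print(utilization_from_service_time(lam, s, 1))  # 0.4
```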

A single process generating blocking IO requests (wait until completion) against a block device does not satisfy the above formula, as the system is not an M/M/c queue. If such a process is IO bound, throughput will not increase by striping over 2 disks, as there is still only one request in the system at a time. Think of the number of servers \(c\) in the utilization formula as the number of servers that could possibly be active for the request-issuing population. So in the single-process blocking IO example, \(c=1\) and the only way to increase throughput is by increasing \(\mu\), that is, lowering the service time. The same holds for a single-threaded process using one full CPU: it can never use two at the same time, so the process can only be sped up by using faster CPUs, not by adding more.

Conversely, when the population is larger or when one or more processes generate asynchronous IO, the relation \(\lambda = c\mu\) does hold, and adding disks (with perfect striping) does increase throughput. A system with 8 threads fully utilizing 4 CPUs and a run queue of around 8 will run roughly twice as fast with 8 CPUs.
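The idea that the effective \(c\) is bounded by the concurrency the workload can generate can be sketched as follows; `max_throughput` is a hypothetical helper, and the 2 ms service time is the same illustrative figure used earlier:

```python
# Maximum sustainable throughput is lambda_max = c * mu, but the
# effective number of busy servers can never exceed the number of
# requests the workload keeps in flight at once.

def max_throughput(concurrency, servers, mu):
    effective_c = min(concurrency, servers)
    return effective_c * mu

mu = 500.0  # requests/s per disk (2 ms service time)

# One blocking process: striping over 2 disks does not help.
print(max_throughput(1, 2, mu))   # 500.0
# Eight concurrent requesters: both disks can be kept busy.
print(max_throughput(8, 2, mu))   # 1000.0
```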

A simple relation for the number of customers in a queuing system is Little's Law, named after John Little, who proved it. It applies to any queuing system:

\[

L=\lambda W

\]

where \(W\) is the average time a request spends in the system, including any waiting on the system's queue(s).
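As a quick worked example of Little's Law (with made-up but plausible numbers, not the iostat figures below):

```python
# Little's law: L = lambda * W, valid for any queuing system in
# steady state, regardless of arrival or service distributions.

lam = 120.0   # arrival rate, requests/s (illustrative)
W = 0.050     # average time in the system, seconds (50 ms)

L = lam * W   # average number of requests in the system
print(L)      # 6.0
```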

Consider this output of `iostat -xm` (averages since boot):

```
Device: rrqm/s wrqm/s   r/s   w/s  rMB/s  wMB/s avgrq-sz avgqu-sz  await r_await w_await svctm %util
sdi       0.01   0.01  0.01  0.35   0.00   0.04   221.92     0.13 344.94    5.64  359.25  3.94  0.14
```

The arrival rate equals r/s + w/s = 0.36 requests per second, and the service time is svctm = 3.94 ms. So the utilization is \[ \rho = \frac{\lambda s}{c} = \frac{0.36 \times 0.00394}{1} = 0.0014184 \] or, as a percentage, about 0.14%, matching the %util column of the iostat output.

If we ignore the reads (iostat reports only one queue size covering both reads and writes, and r/s is negligible here), applying Little's law \(L=\lambda W\) gives \(L=\text{w/s} \times \text{w\_await} = 0.35 \times 0.35925 = 0.1257375 \approx 0.13\), matching the avgqu-sz (average number of IO requests in the system) given by iostat. Note that w_await is reported in milliseconds and must be converted to seconds first.
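Both checks against the iostat sample can be reproduced in a short Python sketch; the field values are copied from the output above:

```python
# Verifying utilization and queue size from the iostat sample.
r_s, w_s = 0.01, 0.35        # reads/s and writes/s
svctm = 3.94 / 1000.0        # average service time, ms -> seconds
w_await = 359.25 / 1000.0    # average write time in system, ms -> seconds

lam = r_s + w_s              # total arrival rate: 0.36 requests/s
rho = lam * svctm / 1        # c = 1 for a single device
print(round(rho * 100, 2))   # 0.14 -> matches %util

L = w_s * w_await            # Little's law, writes only
print(round(L, 2))           # 0.13 -> matches avgqu-sz
```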
