<p dir="ltr">It may depend on the application's tolerance to packet loss. A packet delayed further than the jitter buffer's tolerance counts as lost, so *IF* jitter is randomly distributed, jitter can be traded off against loss. For those purposes, standard deviation may be a valid metric.</p>
<p dir="ltr">However the more common characteristic is that delay is sometimes low (link idle) and sometimes high (buffer full) and rarely in between. In other words, delay samples are not statistically independent; loss due to jitter is bursty, and real-time applications like VoIP can't cope with that. For that reason, and due to your low temporal sampling rate, you should take the peak delay observed under load and compare it to the average during idle.</p>
<p dir="ltr"> - Jonathan Morton<br>
</p>