<div dir="ltr">With the disclaimer that I'm not as strong in statistics and modelling as I'd like to be...<div><br></div><div>I think it's not useful to attempt to stochastically model the behavior of what are actually active (well, reactive) components. The responses of each piece are deterministic, but the inputs (users) are not. So while you could perhaps measure the behavior of a network and then build a hidden Markov model that reproduces the same results, I don't see how that would be useful for testing the behavior of either the reactive components (TCP congestion-control algorithms) or the layers below them (queues and links), because the model needs to react to the behavior of the pieces it's sitting on top of, not evolve according to a stochastic process that's independent (in the statistical sense) of the underlying queues and links.</div><div><br></div><div>Probably a "well, duh..." thought for many here. But I was _amazed_, when working with very senior engineers at network hardware companies, to hear that all their testing was done with a static blend of "IMIX" traffic (in both directions), even though they were looking at last-mile network usage, which was going to be primarily TCP download, just like a home, and nothing like IMIX. Nor did they consider that the applications running on top of that gear were actually reacting to its (mis-)management of queues and load.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jul 9, 2021 at 4:56 PM Jonathan Morton <<a href="mailto:chromatix99@gmail.com">chromatix99@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">> On 10 Jul, 2021, at 2:01 am, Leonard Kleinrock <<a href="mailto:lk@cs.ucla.edu" target="_blank">lk@cs.ucla.edu</a>> wrote:<br>
> <br>
> No question that non-stationarity and instability are what we often see in networks. And, non-stationarity and instability are both topics that lead to very complex analytical problems in queueing theory. You can find some results on the transient analysis in the queueing theory literature (including the second volume of my Queueing Systems book), but they are limited and hard. Nevertheless, the literature does contain some works on transient analysis of queueing systems as applied to network congestion control - again limited. On the other hand, as you said, control theory addresses stability head on and does offer some tools as well, but again, it is hairy. <br>
<br>
I was just about to mention control theory.<br>
<br>
One basic characteristic of Poisson traffic is that it is inelastic: the model assumes there is no control feedback whatsoever. This means it can only be a valid model when both of the following are true:<br>
<br>
1: The offered load is *below* the link capacity, for all links, averaged over time.<br>
<br>
2: A high degree of statistical multiplexing exists.<br>
<br>
If 1: is not true and the traffic is truly inelastic, then the queues will inevitably fill up and congestion collapse will result, as shown by ARPANET experience in the 1980s; the solution was to introduce control feedback into the traffic, initially in the form of TCP Reno. If 2: is not true, then the traffic cannot be approximated as Poisson arrivals, regardless of load relative to capacity, because the degree of correlation between arrivals is too high.<br>
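To make criterion 1 concrete, here is a minimal sketch (hypothetical parameters, standard-library Python only) of an inelastic Poisson source feeding a fixed-rate queue with no feedback of any kind: with offered load below capacity the backlog stays small, while above capacity it grows without bound.

```python
import random

def simulate_backlog(arrival_rate, service_rate, n_arrivals=200_000, seed=1):
    """Inelastic traffic: Poisson arrivals (arrival_rate packets/s) into a
    queue drained at a fixed service_rate, with no feedback at all.
    Returns the backlog (in packets) after n_arrivals."""
    rng = random.Random(seed)
    backlog = 0.0
    for _ in range(n_arrivals):
        gap = rng.expovariate(arrival_rate)               # time until next arrival
        backlog = max(0.0, backlog - service_rate * gap)  # queue drains in the gap
        backlog += 1.0                                    # the packet arrives
    return backlog

below = simulate_backlog(arrival_rate=0.8, service_rate=1.0)  # stays bounded
above = simulate_backlog(arrival_rate=1.2, service_rate=1.0)  # grows without bound
```

Nothing in the model pushes back on the source, so at 120% load the backlog grows roughly linearly with time, which is the congestion-collapse regime described above.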
<br>
Taking the iPhone introduction anecdote as an illustrative example: measured utilisation very close to 100% was a clear warning sign that the Poisson model was inappropriate, and that a control-theory approach was needed instead to capture the feedback effects of congestion control. The high degree of statistical multiplexing inherent to a major ISP backhaul is irrelevant to that determination.<br>
<br>
Such a model would have found that the primary source of control feedback was human users giving up in disgust. However, different humans have different levels of tolerance and persistence, so this feedback did not reduce the load enough to give the majority of users a good service; instead, *all* users received a poor service and many received no usable service at all. Introducing a technological control feedback, in the form of packet loss upon overflow of correctly-sized queues, improved service for everyone.<br>
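As a toy illustration (a hedged sketch with made-up numbers, not the mechanism actually deployed), a round-based AIMD model shows how loss feedback from a finite queue keeps both the offered load and the backlog bounded:

```python
def aimd_rounds(n_flows=4, capacity=100.0, queue_limit=50.0, rounds=2000):
    """Each flow sends cwnd packets per round; the link serves `capacity`
    packets per round and a finite queue absorbs any excess.  On overflow,
    every flow sees loss and halves its cwnd (multiplicative decrease);
    otherwise each flow adds one packet per round (additive increase)."""
    cwnd = [1.0] * n_flows
    queue = 0.0
    for _ in range(rounds):
        queue = max(0.0, queue + sum(cwnd) - capacity)
        if queue > queue_limit:
            cwnd = [c / 2.0 for c in cwnd]  # loss feedback: back off
            queue = queue_limit             # the overflow is dropped
        else:
            cwnd = [c + 1.0 for c in cwnd]  # probe for more bandwidth
    return sum(cwnd), queue

offered, backlog = aimd_rounds()
```

The offered load oscillates in the classic sawtooth around link capacity instead of growing without limit, and the backlog never exceeds the (correctly sized) queue, which is the qualitative improvement described above.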
<br>
(BTW, DNS becomes significantly unreliable at around 1-2 seconds of RTT, due to protocol timeouts; that unreliability is inherited by all applications that rely on DNS lookups. Merely reducing delays consistently below that threshold would have improved perceived reliability markedly.)<br>
<br>
Conversely, when talking about the traffic on a single ISP subscriber's last-mile link, the Poisson model has to be discarded because criterion 2 is false: the number of concurrent flows going to even a family household is probably in the low dozens at best. A control-theory approach can also work here.<br>
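One way to see criterion 2 (using a hypothetical on/off fluid model, not a measurement) is to compare the variance-to-mean ratio of per-interval traffic volume for a few bursty sources versus many small ones at the same aggregate rate; a Poisson count process would give a ratio near 1:

```python
import random

def dispersion_index(n_sources, total_rate=60.0, p_on=0.25, bins=5000, seed=7):
    """Variance-to-mean ratio of per-interval traffic volume from n_sources
    independent on/off sources sharing a fixed aggregate rate.  A Poisson
    count process has an index near 1; larger values mean burstier traffic."""
    rng = random.Random(seed)
    rate_when_on = total_rate / (n_sources * p_on)  # keeps the aggregate rate fixed
    counts = []
    for _ in range(bins):
        counts.append(sum(rate_when_on for _ in range(n_sources)
                          if rng.random() < p_on))
    mean = sum(counts) / bins
    var = sum((c - mean) ** 2 for c in counts) / bins
    return var / mean

few = dispersion_index(2)     # a couple of bursty sources: very spiky
many = dispersion_index(200)  # heavy multiplexing: near-Poisson smoothness
```

With a handful of flows, each source's on/off behaviour dominates and the index is far above 1; only with heavy multiplexing does the aggregate smooth out toward Poisson-like counts, which is why the model fails on a last-mile link.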
<br>
- Jonathan Morton<br>
_______________________________________________<br>
Make-wifi-fast mailing list<br>
<a href="mailto:Make-wifi-fast@lists.bufferbloat.net" target="_blank">Make-wifi-fast@lists.bufferbloat.net</a><br>
<a href="https://lists.bufferbloat.net/listinfo/make-wifi-fast" rel="noreferrer" target="_blank">https://lists.bufferbloat.net/listinfo/make-wifi-fast</a></blockquote></div>