[Cerowrt-devel] [Make-wifi-fast] [Bloat] Little's Law mea culpa, but not invalidating my main point

Aaron Wood woody77 at gmail.com
Sat Jul 17 19:56:55 EDT 2021


With the disclaimer that I'm not as strong in statistics and modelling as
I'd like to be....

I think it's not useful to try to stochastically model the behavior of
what are actually active (well, reactive) components.  The responses of
each piece are deterministic, but the inputs (users) are not.  So while
you could maybe measure the behavior of a network and then build a
hidden Markov model that reproduces the same results, I don't see how
that would be useful for testing the behavior of either the reactive
components (TCP CC algorithms) or the layers below them (queues and
links), because the model needs to react to the behavior of the pieces
it's sitting on top of, rather than follow a stochastic process that's
independent (in the statistical sense) of the underlying queues and
links.
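
To make that concrete, here's a toy sketch of my own (made-up numbers,
not anyone's real model): a single bottleneck queue fed either by an
open-loop Poisson source or by a crude AIMD source.  The only difference
is whether the offered load is allowed to depend on the queue it feeds,
which is exactly the feedback an open-loop model never exercises.

import random

CAPACITY = 100  # packets the link can serve per tick (made-up figure)
BUFFER = 200    # queue limit in packets (made-up figure)
TICKS = 1000

def run(source):
    queue = 0
    dropped = 0
    for _ in range(TICKS):
        arrivals = source(queue)   # only the AIMD source looks at `queue`
        space = BUFFER - queue
        queue += min(arrivals, space)
        dropped += max(0, arrivals - space)
        queue = max(0, queue - CAPACITY)  # drain at the link rate
    return queue, dropped

def poisson_source(mean_rate):
    # Open loop: the offered load never depends on the queue state.
    def source(_queue):
        # crude Poisson(mean_rate) sample via 1000 Bernoulli trials,
        # just to keep this dependency-free
        return sum(random.random() < mean_rate / 1000 for _ in range(1000))
    return source

def aimd_source():
    # Closed loop: additive increase each tick, halve when the queue is
    # nearly full -- a cartoon of what TCP CC does.
    state = {"rate": 10}
    def source(queue):
        if queue > 0.9 * BUFFER:
            state["rate"] = max(1, state["rate"] // 2)
        else:
            state["rate"] += 1
        return state["rate"]
    return source

print("open-loop at 120% of capacity:", run(poisson_source(120)))
print("closed-loop AIMD:             ", run(aimd_source()))

The open-loop run piles up backlog and drops no matter what the queue
does; the AIMD run settles into a sawtooth around the link rate.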

Probably a "well duh..." thought for many here.  But I was _amazed_ when
working with very senior engineers for network hardware companies, who said
all testing was done with a static blend of "i-mix" traffic (in both
directions), even though they were looking at last-mile network usage which
was going to be primarily TCP download, just like a home, and nothing like
i-mix.  Or that the applications running on top of that gear were actually
reactive to their (mis-)management of their queues and loads.

On Fri, Jul 9, 2021 at 4:56 PM Jonathan Morton <chromatix99 at gmail.com>
wrote:

> > On 10 Jul, 2021, at 2:01 am, Leonard Kleinrock <lk at cs.ucla.edu> wrote:
> >
> > No question that non-stationarity and instability are what we often see
> in networks.  And, non-stationarity and instability are both topics that
> lead to very complex analytical problems in queueing theory.  You can find
> some results on the transient analysis in the queueing theory literature
> (including the second volume of my Queueing Systems book), but they are
> limited and hard. Nevertheless, the literature does contain some works on
> transient analysis of queueing systems as applied to network congestion
> control - again limited. On the other hand, as you said, control theory
> addresses stability head on and does offer some tools as well, but again,
> it is hairy.
>
> I was just about to mention control theory.
>
> One basic characteristic of Poisson traffic is that it is inelastic, and
> assumes there is no control feedback whatsoever.  This means it can only be
> a valid model when the following are both true:
>
> 1: The offered load is *below* the link capacity, for all links, averaged
> over time.
>
> 2: A high degree of statistical multiplexing exists.
>
> If 1: is not true and the traffic is truly inelastic, then the queues will
> inevitably fill up and congestion collapse will result, as shown from
> ARPANET experience in the 1980s; the solution was to introduce control
> feedback to the traffic, initially in the form of TCP Reno.  If 2: is not
> true then the traffic cannot be approximated as Poisson arrivals,
> regardless of load relative to capacity, because the degree of correlation
> is too high.
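
(Interjecting: criterion 1 failing is easy to put numbers on.  A
back-of-the-envelope sketch, with figures I made up:)

# Truly inelastic traffic offered at 110% of a 10 Mbit/s link
# with a 1 MB buffer (all made-up figures).
capacity_mbps = 10.0
offered_mbps = 11.0
buffer_megabytes = 1.0

excess_mbps = offered_mbps - capacity_mbps            # backlog grows at 1 Mbit/s
seconds_to_fill = buffer_megabytes * 8 / excess_mbps  # 8 Mbit / 1 Mbit/s = 8 s
loss_fraction = excess_mbps / offered_mbps            # ~9% lost from then on
print(f"buffer full after {seconds_to_fill:.0f} s, "
      f"then {loss_fraction:.0%} of traffic is dropped")
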
>
> Taking the iPhone introduction anecdote as an illustrative example,
> measuring utilisation as very close to 100% is a clear warning sign that
> the Poisson model was inappropriate, and a control-theory approach was
> needed instead, to capture the feedback effects of congestion control.  The
> high degree of statistical multiplexing inherent to a major ISP backhaul is
> irrelevant to that determination.
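
(Interjecting again: even on the Poisson model's own terms, utilisation
near 100% is pathological.  A quick sanity check using the standard
M/M/1 mean-delay formula W = 1/(mu - lambda) = S/(1 - rho), with a
made-up service time:)

# S is the mean service time, rho the utilisation; numbers are made up.
service_time_ms = 1.0   # assumed mean per-packet service time
for rho in (0.5, 0.9, 0.99, 0.999):
    w_ms = service_time_ms / (1 - rho)
    print(f"rho = {rho:5.3f}: mean delay ~ {w_ms:7.1f} ms")
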
>
> Such a model would have found that the primary source of control feedback
> was human users giving up in disgust.  However, different humans have
> different levels of tolerance and persistence, so this feedback was not
> enough to reduce the load to the point where the majority of users got a
> good service; instead, *all* users received a poor service and many users
> received no usable service.  Introducing a technological control feedback,
> in the form of packet loss upon overflow of correctly-sized queues,
> improved service for everyone.
>
> (BTW, DNS becomes significantly unreliable around 1-2 seconds RTT, due to
> protocol timeouts, which is inherited by all applications that rely on DNS
> lookups.  Merely reducing the delays consistently below that threshold
> would have improved perceived reliability markedly.)
>
> Conversely, when talking about the traffic on a single ISP subscriber's
> last-mile link, the Poisson model has to be discarded due to criterion 2
> being false.  The number of flows going to even a family household is
> probably in the low dozens at best.  A control-theory approach can also
> work here.
>
>  - Jonathan Morton
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/make-wifi-fast