[Bloat] Detecting bufferbloat from outside a node

Alan Jenkins alan.christopher.jenkins at gmail.com
Tue May 19 17:23:05 EDT 2015


On 27/04/15 13:03, Toke Høiland-Jørgensen wrote:
> Neil Davies <neil.davies at pnsol.com> writes:
>
>> I don't think that the E2E principle can manage the emerging
>> performance hazards that are arising.
>
> Well, probably not entirely (smart queueing certainly has a place). My
> worry is, however, that going too far in the other direction will turn
> into a Gordian knot of constraints, where anything that doesn't fit into
> the preconceived traffic classes is impossible to do something useful
> with.
>
> Or, to put it another way, I'd like the network to have exactly as much
> intelligence as is needed, but no more. And I'm not sure I trust my ISP
> to make that tradeoff... :(
>
>> We've seen this recently in practice: take a look at
>> http://www.martingeddes.com/how-far-can-the-internet-scale/ - it is
>> based on a real problem we'd encountered.
>
> Well that, and the post linked to from it
> (http://www.martingeddes.com/think-tank/the-future-of-the-internet-the-end-to-end-argument/),
> is certainly quite the broadside against end-to-end principle. Colour me
> intrigued.
>
>> In some ways this is just control theory 101 rearing its head... in
>> another it is a large technical challenge for internet provision.
>
> It's been bugging me for a while that most control theory analysis (of
> AQMs in particular) seems to completely ignore transient behaviour and
> jump straight to the steady state.
>
> -Toke

I may be too slow and obvious to be interesting or just plain wrong, but...

A network developer at Google seems to think end-to-end is not yet 
played out, and that they *do* have an incentive to improve behavior.

https://lists.bufferbloat.net/pipermail/bloat/2015-April/002764.html
https://lists.bufferbloat.net/pipermail/bloat/2015-April/002776.html

Pacing in sch_fq should improve video-on-demand.
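
For concreteness, a minimal sketch of the application-side knob, 
assuming Linux >= 3.13 with sch_fq as the egress qdisc.  The rate 
value is made up, and fq will pace from TCP's own rate estimate even 
without it:

    /* Sketch: cap this socket's pacing rate so sch_fq spaces packets
     * out instead of bursting a chunk per RTT.  Assumes Linux >= 3.13
     * with sch_fq as the egress qdisc; the rate is an example value. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #ifndef SO_MAX_PACING_RATE
    #define SO_MAX_PACING_RATE 47   /* missing from older libc headers */
    #endif

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        unsigned int rate = 2500000;         /* bytes/sec, ~20 Mbit/s */

        if (setsockopt(fd, SOL_SOCKET, SO_MAX_PACING_RATE,
                       &rate, sizeof(rate)) < 0)
            perror("SO_MAX_PACING_RATE");
        close(fd);
        return 0;
    }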

HTTP/2 also provides some improvement for web traffic.  *And* the 
multiplexing should remove the incentive for websites to force 
multiple connections ("sharding").  The incentive then reverses, 
because each connect() (still) costs an RTT.
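
To make that RTT cost concrete, a rough sketch of timing a blocking 
connect(), which is the handshake delay every extra sharded connection 
pays up front.  The address is a documentation placeholder, not a real 
endpoint:

    /* Sketch: time a blocking connect() to see the handshake RTT that
     * each extra sharded connection pays.  192.0.2.1 is a TEST-NET-1
     * documentation address, a placeholder only. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr;
        struct timespec t0, t1;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            clock_gettime(CLOCK_MONOTONIC, &t1);
            printf("connect() took %.1f ms\n",
                   (t1.tv_sec - t0.tv_sec) * 1e3 +
                   (t1.tv_nsec - t0.tv_nsec) / 1e6);
        }
        close(fd);
        return 0;
    }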

The two big applications blamed by the article, mitigated out of 
self-interest?  :-).

I can believe the ΔQ framework / other math might show that more is 
required, and that hiding problems with more bandwidth doesn't scale.  
That ISPs are the ones suffering is more difficult to swallow from a 
customer's point of view.  But still... 'worse is better' [just-in-time 
fixes] has been a very powerful strategy.  <rhetorical> What does the 
first step look like, and what is the cost for customers?

Strawman: how hard is a global _lower_-priority class?  Couldn't 
video-on-demand use it to fill an over-sized playback buffer, and then 
smooth over those 30 seconds of transient congestion?
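
For what it's worth, the closest existing hook I know of is the CS1 
"lower effort" DSCP of RFC 3662.  A sketch of how an application could 
opt in, assuming (a big assumption) that the path actually honours the 
marking:

    /* Sketch: mark a bulk socket with DSCP CS1, the RFC 3662 "lower
     * effort" behaviour, so prefetch traffic yields to everything else
     * wherever the network honours the marking. */
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int tos = 0x20;                      /* DSCP CS1 (8) << 2 */

        if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0)
            perror("IP_TOS");
        close(fd);
        return 0;
    }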

Alan



