[Cerowrt-devel] [Bloat] [Cake] active sensing queue management

David Lang david at lang.hm
Sat Jun 13 00:30:54 EDT 2015


On Fri, 12 Jun 2015, Sebastian Moeller wrote:

> Hi David,
>
> On Jun 12, 2015, at 03:44 , David Lang <david at lang.hm> wrote:
>
>> The problem shows up when either usage changes rapidly, or the network 
>> operator is not keeping up with required upgrades as gradual usage changes 
>> happen (including when they are prevented from upgrading because a peer won't 
>> cooperate)
>
> 	Good point, I was too narrowly focussing on the access link, but peering 
> is another "hot potato". Often end users try to use traceroute and friends and 
> VPNs to uncontested peers to distinguish access-network congestion from 
> "under-peering", even though at the end of the day the effects are similar. 
> Thinking about it, I believe that under-peering shows up more as a bandwidth 
> loss, compared to the combined bandwidth loss and latency increase often seen 
> on the access side (but this is conjecture, as I have never seen traffic data 
> from a congested peering connection).

At the peering point where congestion happens, queues will expand to the max 
available and packets will be dropped. Level 3 had some posts showing stats 
from a congested peering point back before Netflix caved a year or so ago.
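
For illustration, here is a minimal Python sketch of what a tail-drop FIFO at a 
saturated port does; the buffer size and load figures are invented, not taken 
from any real peering data:

from collections import deque

BUFFER_PKTS = 1000          # queue limit at the congested port (assumed)
ARRIVALS_PER_TICK = 12      # offered load: 1.2x the drain rate (assumed)
DEPARTURES_PER_TICK = 10    # link capacity per tick

queue = deque()
dropped = 0

for tick in range(10_000):
    # enqueue arrivals until the buffer is full, tail-drop the rest
    for _ in range(ARRIVALS_PER_TICK):
        if len(queue) < BUFFER_PKTS:
            queue.append(tick)
        else:
            dropped += 1
    # drain at line rate
    for _ in range(DEPARTURES_PER_TICK):
        if queue:
            queue.popleft()

print(f"standing queue: {len(queue)} pkts, dropped: {dropped}")
# The queue sits pinned near the buffer limit (maximum latency) and the
# excess offered traffic is simply dropped, as described above.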

>>
>> As for the "100% provisioning" ideal, think through the theoretical aggregate 
>> and realize that before you get past very many layers, you get to a bandwidth 
>> requirement that it's not technically possible to provide.
>
> 	Well, I still believe that an ISP is responsible for keeping its part of 
> the contract by at least offering a considerable percentage of the sold access 
> bandwidth into its own core network. But 100% is not going to be that 
> percentage, I agree, and I am happy to accept congestion as long as it is 
> transient (and I do not mean that it gets bad every evening and clears up 
> overnight, but rather that the ISP increases bandwidth to keep congestion 
> periods rare)…
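
To make the aggregation point quoted above concrete, here is a back-of-envelope 
Python sketch; the access rate, subscriber counts, and per-layer fan-in are 
made-up illustrative numbers, not anyone's real topology:

access_rate_gbps = 0.1      # 100 Mbit/s sold per subscriber (assumed)
subs_per_dslam   = 500      # fan-in at each aggregation layer (assumed)
dslams_per_metro = 200
metros_per_core  = 50

dslam_uplink = access_rate_gbps * subs_per_dslam    #     50 Gbit/s
metro_uplink = dslam_uplink * dslams_per_metro      # 10,000 Gbit/s
core_total   = metro_uplink * metros_per_core       # 500,000 Gbit/s

print(f"per-DSLAM uplink needed: {dslam_uplink:g} Gbit/s")
print(f"per-metro uplink needed: {metro_uplink:g} Gbit/s")
print(f"core aggregate needed:   {core_total:g} Gbit/s")
# Three layers of aggregation at "100% provisioning" already call for half a
# petabit per second in the core, which is why every network is oversubscribed
# somewhere.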

I think the target the ISP should be striving for is 0 congestion, not 0 
overprovisioning. And I deliberately say "striving for" because it's not going 
to be perfect. And whether you are congested or not depends on the timescale 
you look at (at any instant, a given link is either 0% utilized or 100% 
utilized, nowhere in between).
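
A small Python sketch of the timescale point, using a hypothetical on/off 
pattern (1 ms of back-to-back packets followed by 9 ms of idle):

BUSY_US, IDLE_US = 1000, 9000   # hypothetical transmission pattern

def utilization(window_us):
    """Fraction of a window (starting at the busy burst) spent transmitting."""
    period = BUSY_US + IDLE_US
    full, rest = divmod(window_us, period)
    busy = full * BUSY_US + min(rest, BUSY_US)
    return busy / window_us

for w in (100, 1_000, 10_000, 1_000_000):
    print(f"{w:>9} us window: {utilization(w):6.1%}")
# A window that lands inside the burst sees 100% utilization; a one-second
# window sees 10%. Same link, same traffic, different answer.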

Good AQM will make it so that when congestion does happen, all that happens is 
that the bandwidth ends up getting shared. Everyone continues to operate with 
minimal noticeable degradation (ideally all of it suffered by the 
non-time-critical bulk data transfers).
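
As an illustration of "the bandwidth ends up getting shared", here is a Python 
sketch of max-min fair allocation, which is roughly the per-flow outcome that 
flow-queueing AQMs like fq_codel and cake aim for; the link rate and flow 
demands are invented:

def max_min_share(capacity, demands):
    """Max-min fair allocation of link capacity across flow demands (Mbit/s)."""
    share = {}
    remaining = dict(demands)
    while remaining:
        fair = capacity / len(remaining)
        # flows asking for less than the current fair share get all they want
        satisfied = {f: d for f, d in remaining.items() if d <= fair}
        if not satisfied:
            # everyone left wants more than the fair share: split evenly
            return {**share, **{f: fair for f in remaining}}
        for f, d in satisfied.items():
            share[f] = d
            capacity -= d
            del remaining[f]
    return share

# A 20 Mbit/s link shared by a video stream, a VoIP call, and two bulk downloads.
print(max_min_share(20, {"video": 5, "voip": 0.1, "bulk1": 100, "bulk2": 100}))
# -> the video and the call get their full rate; the bulk transfers split the
#    rest, which is exactly "everyone keeps working, bulk absorbs the hit".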

After all, if you are streaming a video from Netflix, does it really matter if 
the 2-hour movie is entirely delivered to your local box in 1 minute instead of 
20 minutes? If you're downloading it to put it on a mobile device before you 
leave, possibly; but if you're just watching it, not at all.
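
The arithmetic, assuming a hypothetical 5 Mbit/s encode of the 2-hour movie:

MOVIE_MIN   = 120
ENCODE_MBPS = 5                             # hypothetical encode rate
movie_mbit  = MOVIE_MIN * 60 * ENCODE_MBPS  # 36,000 Mbit total

for delivery_min in (120, 20, 1):
    rate = movie_mbit / (delivery_min * 60)
    print(f"delivered in {delivery_min:>3} min -> {rate:6.0f} Mbit/s sustained")
# Watching in real time needs 5 Mbit/s, 20-minute delivery needs 30 Mbit/s,
# and 1-minute delivery needs 600 Mbit/s, with no benefit to the viewer.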

David Lang

