[Cerowrt-devel] [Bloat] [Cake] active sensing queue management

MUSCARIELLO Luca IMT/OLN luca.muscariello at orange.com
Fri Jun 12 05:52:03 EDT 2015


On 06/12/2015 03:44 AM, David Lang wrote:
> On Thu, 11 Jun 2015, Sebastian Moeller wrote:
>
>>
>> On Jun 11, 2015, at 03:05, Alan Jenkins 
>> <alan.christopher.jenkins at gmail.com> wrote:
>>
>>> On 10/06/15 21:54, Sebastian Moeller wrote:
>>>
>>> One solution would be for ISPs to make sure upload is 100% 
>>> provisioned. That could be cheaper than doing the same for the 
>>> (higher-rate) download direction.
>>
>>     Not going to happen, in my opinion, as it is economically 
>> unfeasible for a publicly traded ISP. I would settle for that 
>> approach as long as the ISP is willing to fix its provisioning so 
>> that oversubscription episodes are reasonably rare, though.
>
> not going to happen on any network, publicly traded or not.
>
> The question is not "can the theoretical max of all downstream devices 
> exceed the upstream bandwidth", because that answer is going to be 
> "yes" for every network ever built, LAN or WAN, but rather "does the 
> demand in practice of the combined downstream devices exceed the 
> upstream bandwidth for long enough to be a problem".
>
> it's not even a matter of what percentage they are oversubscribed by.
>
> someone with 100 1.5Mb DSL lines downstream and a 50Mb upstream (~33% 
> of theoretical requirements) is probably a lot worse off than someone 
> with 100 1G lines downstream and a 10G upstream (10% of theoretical 
> requirements), because it's far less likely that the users of the 1G 
> lines are actually going to saturate them (let alone simultaneously 
> for a noticeable timeframe), while it's very likely that the users of 
> the 1.5Mb DSL lines are going to saturate their lines for extended 
> timeframes.
>
> The problem shows up either when usage changes rapidly, or when the 
> network operator does not keep up with the upgrades required as 
> gradual usage changes happen (including when they are prevented from 
> upgrading because a peer won't cooperate).
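
To make the ratio argument concrete, here is a back-of-the-envelope 
sketch in Python, using the hypothetical subscriber counts and rates 
from the quoted paragraph above:

# Provisioning ratio alone does not tell you whether a backhaul link
# is adequate; actual demand does. The numbers are the hypothetical
# examples from the quoted paragraph.

def provisioning_ratio(n_subs, access_mbps, backhaul_mbps):
    """Backhaul capacity as a fraction of theoretical peak demand."""
    return backhaul_mbps / (n_subs * access_mbps)

# 100 subscribers at 1.5 Mb/s DSL behind a 50 Mb/s backhaul: ~0.33
print(provisioning_ratio(100, 1.5, 50))

# 100 subscribers at 1 Gb/s behind a 10 Gb/s backhaul: 0.10
print(provisioning_ratio(100, 1000, 10000))

# The 10%-provisioned network is likely the healthier one: what loads
# the backhaul is actual demand, and 1.5 Mb/s lines are routinely
# saturated for long stretches while 1 Gb/s lines rarely are.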

Good points. Let me add a side comment, though.
We observe that fiber users (e.g. 1Gbps/300Mbps access with GPON) are 
changing behavior relative to DSL users in the way they use the 
uplink. This is mostly (though not only) due to the availability of 
personal cloud storage and the fact that everyone today can produce 
tons of big videos that people are willing to store outside the home.
As a result it is not unlikely that backhaul link utilization drifts 
outside what network planning, which is based on long-term statistics, 
accounted for. These workloads are unpredictable, and if on the one 
hand it is not feasible to over-provision for such unpredictable, 
long-lasting peaks, on the other hand you need smart queue management 
to cope with the events where the bottleneck is the backhaul.
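
To illustrate the kind of smart queue management I mean, here is a 
minimal CoDel-style drop decision in Python. It is a simplified sketch 
of the general technique (a delay target, a persistence interval, and 
a control law that spaces drops), not any particular vendor's or 
kernel's implementation:

import math

TARGET = 0.005    # seconds: acceptable standing queueing delay
INTERVAL = 0.100  # seconds: delay must persist this long first

class CoDelState:
    def __init__(self):
        self.first_above = None  # when delay first exceeded TARGET
        self.drop_count = 0      # drops in the current episode
        self.next_drop = 0.0     # earliest time of the next drop

def should_drop(state, sojourn, now):
    """True if a packet that waited `sojourn` seconds in the queue
    should be dropped at time `now`."""
    if sojourn < TARGET:
        state.first_above = None   # delay under control: reset
        state.drop_count = 0
        return False
    if state.first_above is None:
        state.first_above = now    # start the persistence timer
        return False
    if now - state.first_above < INTERVAL:
        return False               # high delay, but not persistent yet
    if now >= state.next_drop:
        state.drop_count += 1
        # Control law: drops come closer together the longer the
        # episode lasts, pushing senders to back off harder.
        state.next_drop = now + INTERVAL / math.sqrt(state.drop_count)
        return True
    return False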

Considering the cost of current equipment upgrades, I feel that very 
high-speed access will impose smart queue management everywhere from 
the access network up to the transit links, including the entire 
backhaul. The bad news is that no such queuing systems are available 
in current equipment, so I guess adoption will be pretty slow.

>
> As for the "100% provisioning" ideal, think through the theoretical 
> aggregate and realize that before you get past very many layers, you 
> reach a bandwidth requirement that is not technically possible to 
> provide.
>
> David Lang
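
David's aggregation point is easy to check with simple arithmetic. A 
quick sketch, with made-up fan-out figures used purely for 
illustration (no real topology implied):

# What 100% provisioning would require as you aggregate upward through
# the layers of a hypothetical access network.

access_rate_gbps = 1     # per-subscriber access rate
fanout = [100, 50, 20]   # subscribers per access node, access nodes
                         # per metro router, metro routers per core

required = access_rate_gbps
for layer, n in enumerate(fanout, start=1):
    required *= n
    print(f"layer {layer}: {required:,} Gbps for a 100% guarantee")

# layer 1: 100 Gbps
# layer 2: 5,000 Gbps
# layer 3: 100,000 Gbps -- and every layer above multiplies it again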



