[Cake] [Bloat] active sensing queue management

Benjamin Cronce bcronce at gmail.com
Fri Jun 12 12:51:06 EDT 2015


> On 12/06/15 02:44, David Lang wrote:
> > On Thu, 11 Jun 2015, Sebastian Moeller wrote:
> >
> >>
> >> On Jun 11, 2015, at 03:05 , Alan Jenkins
> >> <alan.christopher.jenkins at gmail.com> wrote:
> >>
> >>> On 10/06/15 21:54, Sebastian Moeller wrote:
> >>>
> >>> One solution would be if ISPs made sure upload is 100% provisioned.
> >>> Could be cheaper than for (the higher rate) download.
> >>
> >>     Not going to happen, in my opinion, as economically unfeasible
> >> for a publicly traded ISP. I would settle for that approach as long
> >> as the ISP is willing to fix its provisioning so that
> >> oversubscription episodes are reasonably rare, though.
> >
> > not going to happen on any network, publicly traded or not.

Sorry if this is a tangent from where the current discussion has gone, but
I wanted to correct someone saying something is "impossible".

Funny that this was discussed. My ISP sells all packages as symmetrical,
"dedicated", and business grade. They define "dedicated" as never
oversubscribed. They do oversubscribe the trunk, obviously, but from the
last mile to the core there is no oversubscription.
It's a small ISP, so one time when I called on a weekend, a senior network
engineer answered the support line. He told me they did try
oversubscription when they first rolled out DSL, but managing congestion
was a huge waste of time and increased customer support calls. They
eventually went "dedicated" and never looked back; support calls were
reduced enough to make it worth their while.

In their ToS, they have a net neutrality promise to NEVER block, shape,
degrade, or prioritize any traffic, including their own or their partners'.

I can understand taking this approach with 1Mb DSL or even 50Mb fiber,
where your uplinks are orders of magnitude faster. I don't know if they can
continue this approach at 1Gb and future 10Gb PON speeds.

Prices: $40 for 20/20, $70 for 70/70, $90 for 100/100, $200 for 250/250,
with a /29 static block available for $10/month on all packages. And no
extra or hidden fees; those prices are exactly what you pay, plus sales tax.
A traceroute shows I am a single hop from Level 3, which is their exclusive
transit provider. No data caps, "dedicated" service, and they told me I
should never see congestion, should always get my provisioned rates, and
should call if I get anything less than perfect.
I did take them up on this once. I was getting a 20ms ping where I normally
got 13ms, and I was only getting 45Mb/s on my 50Mb package at the time. I
called them, and they put me in direct contact with an engineer. Fifteen
minutes later the problem was fixed; 15 minutes after that the engineer
called back and told me they were under a DDoS attack and had to activate
one of their many failover links to Level 3 to increase their bandwidth.

Another time I was having brief transient packet loss. I called them at 2am
saying I was having an issue, and they told me they'd have someone at my
house to fix it ASAP in the morning. At 8am sharp, a knock on the door. It
turned out the CAT5e already in the house had a bad crimp.

When I signed up for their fiber service, they were still deploying fiber
all around the city. They actually gave me a direct personal contact for
the entire process. I was called the day before I was supposed to get my
connection and told that my dedicated fiber ring had an issue and
installation was going to be pushed out a week.
I told them I needed the Internet for my job and had just dropped my other
provider. I was told they'd call me back. A few hours later, the entire
road was filled with a large work crew. They fixed my fiber ring, and I got
my Internet on the day promised. No installation fee.

I guess I went off on this tangent because "Not going to happen, in my
opinion, as economically unfeasible" and "not going to happen on any
network, publicly traded or not" are too absolute. It can be done, it is
being done, it is being done cheaply, and it is being done with "business
class" professionalism. Charter offers half the download speed for the same
price, and their service is neither symmetrical nor dedicated.

P.S. I think I messed up the chain. Sorry. Still getting used to this.

>
> Sure, I'm flailing.  Note this was in the context of AQSM as Daniel
> describes it.  (Possibly misnamed given it only drops.  All the queuing
> is "underneath" AQSM, "in the MAC layer" as the paper says :).
>
> - AQSM isn't distinguishing up/down bloat.  When it detects bloat it has
> to limit both directions in equal proportion.
>
> => if there is upload contention (and your user is uploading), you may
> hurt apps sensitive to download bandwidth (streaming video), when you
> don't need to.
>
> What would the solutions look like?
>
> i) If contention in one direction was negligible, you could limit the
> other direction only.  Consumer connections are highly asymmetric, and
> AQSM is only measuring the first IP hop.  So it's more feasible than
> 100% in both directions.  And this isn't about core networks (with
> larger statistical universes... whether that helps or not).
>
> I'm sure you're right and they're not asymmetric _enough_.
>
>
> ii) Sebastian points out if you implement AQSM in the modem (as the
> paper claims :p), you may as well BQL the modem drivers and run AQM.
> *But that doesn't work on ingress* - ingress requires tbf/htb with a set
> rate - but the achievable rate is lower in peak hours. So run AQSM on
> ingress only!  Point being that download bloat could be improved without
> changing the other end (CMTS).
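The ingress-only idea above (shape downloads with tbf/htb at a fixed rate and run AQM behind it, since BQL only helps egress queues) is commonly realized by redirecting ingress traffic to an IFB device. A hedged sketch follows, generating the tc commands without executing them; the interface names, the 45Mb rate, and the HTB + fq_codel combination are illustrative assumptions, not something from the post:

```python
# Sketch: ingress shaping via an IFB device. Download (ingress) traffic on
# the WAN interface is redirected to a virtual ifb0 device, shaped there
# with HTB at a fixed rate, with fq_codel as the AQM underneath. The rate
# must sit somewhat below the provisioned rate to keep the queue local.

IFACE = "eth0"          # assumed WAN-facing interface
IFB = "ifb0"            # virtual device receiving the redirected ingress
DOWNLINK_KBIT = 45000   # e.g. ~90% of a 50 Mb/s service (assumption)

def ingress_shaping_commands(iface: str, ifb: str, rate_kbit: int) -> list:
    """Return the ip/tc commands that would set up ingress shaping."""
    return [
        f"ip link add {ifb} type ifb",
        f"ip link set {ifb} up",
        # Redirect all ingress traffic on the WAN interface to the IFB.
        f"tc qdisc add dev {iface} handle ffff: ingress",
        (f"tc filter add dev {iface} parent ffff: matchall "
         f"action mirred egress redirect dev {ifb}"),
        # Shape the redirected traffic with HTB at a fixed rate ...
        f"tc qdisc add dev {ifb} root handle 1: htb default 10",
        (f"tc class add dev {ifb} parent 1: classid 1:10 "
         f"htb rate {rate_kbit}kbit"),
        # ... and run an AQM (fq_codel) underneath the rate limiter.
        f"tc qdisc add dev {ifb} parent 1:10 fq_codel",
    ]

for cmd in ingress_shaping_commands(IFACE, IFB, DOWNLINK_KBIT):
    print(cmd)
```

The fixed-rate requirement is exactly the weakness discussed above: in peak hours the achievable rate drops below the configured one, which is what an AQSM-style active measurement would try to track.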
>
> >
> > The question is not "can the theoretical max of all downstream devices
> > exceed the upstream bandwidth" because that answer is going to be
> > "yes" for every network built, LAN or WAN, but rather "does the demand
> > in practice of the combined downstream devices exceed the upstream
> > bandwidth for long enough to be a problem"
> >
> > it's not even a matter of what percentage they are oversubscribed.
> >
> > someone with 100 1.5Mb DSL lines downstream and a 50Mb upstream (30%
> > of theoretical requirements) is probably a lot worse than someone with
> > 100 1G lines downstream and a 10G upstream (10% of theoretical
> > requirements), because it's far less likely that the users of the 1G
> > lines are actually going to saturate them (let alone simultaneously
> > for a noticeable timeframe), while it's very likely that the users of
> > the 1.5M DSL lines are going to saturate their lines for extended
> > timeframes.
> >
> > The problem shows up when either usage changes rapidly, or the network
> > operator is not keeping up with required upgrades as gradual usage
> > changes happen (including when they are prevented from upgrading
> > because a peer won't cooperate)
> >
> > As for the "100% provisioning" ideal, think through the theoretical
> > aggregate and realize that before you get past very many layers, you
> > get to a bandwidth requirement that it's not technically possible to
> > provide.
> >
> > David Lang
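David's ratio arithmetic above can be made explicit with a quick sketch (his own line counts and rates; note that 50/150 is actually ~33%, which he rounds to 30%):

```python
def oversubscription_ratio(n_lines: int, line_rate_mbit: float,
                           uplink_mbit: float) -> float:
    """Fraction of the theoretical aggregate demand the uplink can carry."""
    return uplink_mbit / (n_lines * line_rate_mbit)

# 100 x 1.5 Mb/s DSL lines behind a 50 Mb/s uplink:
dsl = oversubscription_ratio(100, 1.5, 50)        # ~0.33
# 100 x 1 Gb/s fiber lines behind a 10 Gb/s uplink:
fiber = oversubscription_ratio(100, 1000, 10000)  # 0.10

print(f"DSL: {dsl:.0%}, fiber: {fiber:.0%}")
```

The point survives the arithmetic: the DSL aggregate is "better" provisioned on paper, yet worse in practice, because slow lines are far more likely to be saturated for long stretches.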


On Fri, Jun 12, 2015 at 4:52 AM, MUSCARIELLO Luca IMT/OLN <
luca.muscariello at orange.com> wrote:

> On 06/12/2015 03:44 AM, David Lang wrote:
>
>> On Thu, 11 Jun 2015, Sebastian Moeller wrote:
>>
>>
>>> On Jun 11, 2015, at 03:05 , Alan Jenkins <
>>> alan.christopher.jenkins at gmail.com> wrote:
>>>
>>>  On 10/06/15 21:54, Sebastian Moeller wrote:
>>>>
>>>> One solution would be if ISPs made sure upload is 100% provisioned.
>>>> Could be cheaper than for (the higher rate) download.
>>>>
>>>
>>>     Not going to happen, in my opinion, as economically unfeasible for a
>>> publicly traded ISP. I would settle for that approach as long as the ISP is
>>> willing to fix its provisioning so that oversubscription episodes are
>>> reasonably rare, though.
>>>
>>
>> not going to happen on any network, publicly traded or not.
>>
>> The question is not "can the theoretical max of all downstream devices
>> exceed the upstream bandwidth" because that answer is going to be "yes" for
>> every network built, LAN or WAN, but rather "does the demand in practice of
>> the combined downstream devices exceed the upstream bandwidth for long
>> enough to be a problem"
>>
>> it's not even a matter of what percentage they are oversubscribed.
>>
>> someone with 100 1.5Mb DSL lines downstream and a 50Mb upstream (30% of
>> theoretical requirements) is probably a lot worse than someone with 100 1G
>> lines downstream and a 10G upstream (10% of theoretical requirements),
>> because it's far less likely that the users of the 1G lines are actually
>> going to saturate them (let alone simultaneously for a noticeable
>> timeframe), while it's very likely that the users of the 1.5M DSL lines are
>> going to saturate their lines for extended timeframes.
>>
>> The problem shows up when either usage changes rapidly, or the network
>> operator is not keeping up with required upgrades as gradual usage changes
>> happen (including when they are prevented from upgrading because a peer
>> won't cooperate)
>>
>
> Good points. Let me add a side comment, though.
> We observe that fiber users (e.g. 1Gbps/300Mbps access with GPON) are
> changing behavior relative to DSL users in the way they use the uplink.
> This is mostly (though not only) due to the availability of personal
> cloud storage, and the fact that everyone today is able to produce tons
> of big videos that people are willing to store outside the home.
> As a result, it's not unlikely that backhaul link utilization falls
> outside the network planning, which is based on long-term statistics.
> These workloads are unpredictable, and if on one hand it's not feasible
> to overprovision for such unpredictable long peaks, on the other hand
> you'd need smart queue management to cope with such events, where the
> bottleneck is the backhaul.
>
> Considering the cost of current equipment upgrades, I feel that very
> high speed access will impose smart queue management everywhere from the
> access up to the transit links, including the entire backhaul.
> The bad news is that no such queuing systems are available in current
> equipment, so I guess the process will be pretty slow.
>
>
>> As for the "100% provisioning" ideal, think through the theoretical
>> aggregate and realize that before you get past very many layers, you get to
>> a bandwidth requirement that it's not technically possible to provide.
>>
>> David Lang
>> _______________________________________________
>> Bloat mailing list
>> Bloat at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>>

