[Cerowrt-devel] [Bloat] [Cake] active sensing queue management

David Lang david at lang.hm
Sat Jun 13 00:16:32 EDT 2015


On Fri, 12 Jun 2015, Benjamin Cronce wrote:

>> On 12/06/15 02:44, David Lang wrote:
>>> On Thu, 11 Jun 2015, Sebastian Moeller wrote:
>>>
>>>>
>>>> On Jun 11, 2015, at 03:05 , Alan Jenkins
>>>> <alan.christopher.jenkins at gmail.com> wrote:
>>>>
>>>>> On 10/06/15 21:54, Sebastian Moeller wrote:
>>>>>>
>>>>> One solution would be if ISPs made sure upload is 100% provisioned.
>>>>> Could be cheaper than for (the higher rate) download.
>>>>
>>>>     Not going to happen, in my opinion, as economically unfeasible
>>>> for a publicly traded ISP. I would settle for that approach as long
>>>> as the ISP is willing to fix its provisioning so that
>>>> oversubscription episodes are reasonably rare, though.
>>>
>>> not going to happen on any network, publicly traded or not.
>
> Sorry if this is a tangent from where the current discussion has gone, but
> I wanted to correct someone saying something is "impossible".
>
<snip>
>
> I guess I went off on this tangent because "Not going to happen, in my
> opinion, as economically unfeasible" and "not going to happen on any
> network, publicly traded or not." are too absolute. It can be done, it is
> being done, it is being done for cheap, and being done with "business
> class" professionalism. Charter Comm is 1/2 the download speed for the same
> price and they don't even have symmetrical or dedicated.

Not being oversubscribed includes the trunk. Who cares if there is no congestion 
within the ISP's network if you reach the trunk and everything comes to a 
screeching halt?

The reason I used the word "impossible" is that we can only make links so fast. 
Right now 10G is common, 40G is deployed in places, and 100G is still in research; 
go back a few years and 1G was the limit. While the fastest links have increased 
by a factor of 100, home connections have increased by close to a factor of 1000 
in that time (1.5Mb theoretical DSL vs 1Gb fiber), and 10G is getting cheap enough 
to be used for corporate networks.

So the ratio between the fastest possible link and a single subscriber's 
connection has dropped from 1000:1 to 100:1, with 10:1 uncomfortably close.
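
To put rough numbers on that, here's a back-of-the-envelope sketch in Python 
(the link speeds are just the ones above; treat the figures as illustrative, 
not measurements):

def ratio(trunk_bps, subscriber_bps):
    # How many subscribers at full rate a single trunk link could carry.
    return trunk_bps / subscriber_bps

GBPS = 1e9
MBPS = 1e6

print(ratio(1 * GBPS, 1.5 * MBPS))   # a few years ago: ~667:1, the 1000:1 ballpark
print(ratio(100 * GBPS, 1 * GBPS))   # today: 100G trunk vs 1G fiber -> 100:1
print(ratio(100 * GBPS, 10 * GBPS))  # if home links reach 10G first -> 10:1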

Some of this can be covered up by deploying more lines, but that only goes so 
far.

If you are trying to guarantee no bandwidth limits on your network under any 
conditions, you need that sort of bandwidth between every possible pair of 
customers as well, which means that as you scale to more customers, your 
bandwidth requirements grow as O(n^2).
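
To make the scaling concrete, here's a tiny Python sketch (the per-customer rate 
and the customer counts are made-up values, purely for illustration) of the 
aggregate capacity a non-blocking any-to-any guarantee implies:

def full_mesh_capacity_gbps(n_customers, rate_gbps=1.0):
    # Worst case: every pair of customers talking at the full guaranteed rate.
    n_pairs = n_customers * (n_customers - 1) // 2
    return n_pairs * rate_gbps

for n in (10, 100, 1000, 10000):
    print(f"{n} customers -> {full_mesh_capacity_gbps(n):,.0f} Gbps")

# 10 customers    -> 45 Gbps
# 100 customers   -> 4,950 Gbps
# 1000 customers  -> 499,500 Gbps
# 10000 customers -> 49,995,000 Gbps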

And then pricing comes into it. 1G fiber to the home at <$100/month (if it 
exists at all) isn't going to pay for that sort of idle bandwidth.

But the good thing is, you don't actually need that much bandwidth to keep 
your customers happy either.

If the 'guaranteed bandwidth' is written into the contracts properly, there is a 
penalty to the company if they can't provide you the bandwidth. That leaves them 
the option of not actually building out the O(n^2) network, just keeping ahead of 
actual requirements, and occasionally paying a penalty when they fall short.
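
As a toy illustration of that trade-off (every number below is invented purely 
for the example; none of it is real pricing or SLA data), the comparison looks 
roughly like this:

def cost_full_buildout(n_customers, rate_gbps=1.0, cost_per_gbps=100.0):
    # Build the full any-to-any network up front: capacity scales ~n^2.
    pairs = n_customers * (n_customers - 1) // 2
    return pairs * rate_gbps * cost_per_gbps

def cost_track_demand(n_customers, rate_gbps=1.0, cost_per_gbps=100.0,
                      peak_utilization=0.05, miss_probability=0.01,
                      penalty_per_miss=10.0):
    # Provision for observed peak demand and pay the occasional SLA penalty.
    capacity = n_customers * rate_gbps * peak_utilization
    expected_penalties = n_customers * miss_probability * penalty_per_miss
    return capacity * cost_per_gbps + expected_penalties

n = 10000
print(cost_full_buildout(n))   # ~5.0e9 cost units
print(cost_track_demand(n))    # ~5.1e4 cost units

Even with very different assumptions the gap stays enormous, which is why 
keeping just ahead of demand and accepting the occasional penalty can be 
rational for both sides.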

David Lang


