[Cerowrt-devel] [Cake] [Bloat] active sensing queue management

Benjamin Cronce bcronce at gmail.com
Sat Jun 13 01:37:03 EDT 2015


> On Fri, 12 Jun 2015, Benjamin Cronce wrote:
>
> >> On 12/06/15 02:44, David Lang wrote:
> >>> On Thu, 11 Jun 2015, Sebastian Moeller wrote:
> >>>
> >>>>
> >>>> On Jun 11, 2015, at 03:05 , Alan Jenkins
> >>>> <alan.christopher.jenkins at gmail.com> wrote:
> >>>>
> >>>>> On 10/06/15 21:54, Sebastian Moeller wrote:
> >>>>>>
> >>>>> One solution would be if ISPs made sure upload is 100% provisioned.
> >>>>> Could be cheaper than for (the higher rate) download.
> >>>>
> >>>>     Not going to happen, in my opinion, as economically unfeasible
> >>>> for a publicly traded ISP. I would settle for that approach as long
> >>>> as the ISP is willing to fix its provisioning so that
> >>>> oversubscription episodes are reasonably rare, though.
> >>>
> >>> not going to happen on any network, publicly traded or not.
> >
> > Sorry if this is a tangent from where the current discussion has gone,
> > but I wanted to correct someone saying something is "impossible".
> >
> <snip>
> >
> > I guess I went off on this tangent because "Not going to happen, in my
> > opinion, as economically unfeasible" and "not going to happen on any
> > network, publicly traded or not." are too absolute. It can be done, it
> > is being done, it is being done for cheap, and being done with
> > "business class" professionalism. Charter Comm offers half the download
> > speed for the same price, and they don't even have symmetrical or
> > dedicated bandwidth.
>
> not being oversubscribed includes the trunk. Who cares if there is no
> congestion within the ISP if you reach the trunk and everything comes to
> a screeching halt?
>
> The reason I used the word "impossible" is because we only have the
> ability to make links so fast. Right now we have 10G common, 40G in
> places, and research into 100G; if you go back a few years, 1G was the
> limit. While the fastest connections have increased by a factor of 100,
> the home connections have increased by close to a factor of 1000 during
> that time (1.5Mb theoretical DSL vs 1Gb fiber), and 10G is getting cheap
> enough to be used for corporate networks.

100Gb ports only cost about $5k, and you can purchase, for unlisted
prices, 36Tb/s multiplexers that run over a single fiber with 1200km
ranges. That's enough bandwidth for 360 100Gb ports. Congestion on trunks
is easy to manage: with a large number of flows, statistical multiplexing
gives you strong guarantees. The last mile is the main issue, because
that's where a single person can make a difference.
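
To make the statistical-multiplexing point concrete, here's a toy Monte
Carlo sketch in Python; the activity numbers are invented for
illustration, not measurements:

    # Toy model: many independent subscribers sharing one trunk.
    import random

    USERS = 10_000        # subscribers behind the trunk
    RATE_MBPS = 1_000     # each subscriber's access rate
    ACTIVE_PROB = 0.01    # assumed chance a user transmits at any instant
    ROUNDS = 1_000        # independent snapshots to sample

    peak = 0
    for _ in range(ROUNDS):
        active = sum(random.random() < ACTIVE_PROB for _ in range(USERS))
        peak = max(peak, active * RATE_MBPS)

    print(f"sum of access rates: {USERS * RATE_MBPS / 1e6:.0f} Tb/s")
    print(f"sampled trunk peak : {peak / 1e6:.2f} Tb/s")

The sampled peak lands near USERS * ACTIVE_PROB * RATE_MBPS (about 0.13
Tb/s here), far below the 10 Tb/s sum of access rates; no single user can
move the aggregate, which is exactly why the last mile is the harder
problem.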

>
> so the ratio between the fastest link that's possible and the
> subscribers has dropped from 1000:1 to 100:1, with 10:1 uncomfortably
> close.
>
> some of this can be covered up by deploying more lines, but that only
> goes so far.
>
> If you are trying to guarantee no bandwidth limits on your network under
> any conditions, you need that sort of bandwidth between all possible
> sets of customers as well, which means that as you scale to more
> customers, your bandwidth requirements are going up O(n^2).

O(n^2) is only needed if you desire a fully connected graph. When I have
a 100Mb connection, it doesn't mean I get a 100Mb connection to every
customer at the same time; I get 100Mb total. Otherwise, with 1,000
customers, I would have to have a 100Gb link at home. You only need O(n)
scaling. Anyway, I don't think you fully appreciate modern 1 petabit
(500Tb/s ingress + egress) line-rate core routers.
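
A quick sketch of that scaling difference (Python; the customer counts
are just the ones from this discussion):

    # Dedicated link per customer pair (full mesh, O(n^2)) versus one
    # access link per customer into a shared core (star, O(n)).
    def mesh_links(n: int) -> int:
        return n * (n - 1) // 2   # every pair directly connected

    def star_links(n: int) -> int:
        return n                  # one uplink per customer

    for n in (1_000, 100_000, 500_000):
        print(f"n={n:>7,}  mesh={mesh_links(n):>15,}  star={star_links(n):>7,}")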

This is all commercially available hardware, but it would of course be
incredibly expensive. Take a 1 petabit router with 1,000 500Gb/s ports;
each 500Gb port plugs into a fiber chassis with 500 1Gb ports. Now you
have 500,000 people, all with fully dedicated 1Gb/s connections. Of
course you do not have any internet access with this setup, because all
of the router's ports are used for connecting customers. Cut the
customers in half, to 250k; that frees up half of the ports. Now you
have 500 500Gb ports freed up. Take those 500Gb ports, plug them into
36Tb/s multiplexers, get 14 strands of fiber, 7 up and 7 down, and run
them to your local exchange points. Mind you, this system would have 10x
more bandwidth than the entire worldwide internet's peak data usage.
That includes all CDNs, trunks, everything.
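
Spelling that arithmetic out (same numbers as above, nothing else
assumed):

    # Back-of-the-envelope check of the example above.
    import math

    PORTS = 1_000      # 500Gb/s ports on the core router
    PORT_GB = 500      # Gb/s per router port
    FANOUT = 500       # 1Gb customer ports per fiber chassis
    MUX_TB = 36        # Tb/s per multiplexed fiber strand

    customer_ports = PORTS // 2                # half the router for customers
    customers = customer_ports * FANOUT        # 250,000 dedicated 1Gb users
    uplink_tb = (PORTS - customer_ports) * PORT_GB / 1_000   # 250 Tb/s uplink
    strands_each_way = math.ceil(uplink_tb / MUX_TB)         # 7 up, 7 down

    print(f"{customers:,} customers, {uplink_tb:.0f} Tb/s uplink, "
          f"{strands_each_way} strands each direction")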

Of course, the average user consumes less than 10Mb/s during peak hours,
even if they have a 1Gb connection. All that matters is peak usage.
Everything about society is provisioned for actual usage, not
theoretical worst cases that are about as likely as a duplicate Earth
appearing next to us in space. Yes, it is possible, just not likely. Our
electrical infrastructure could not handle every user at once; neither
could our water, roads, phones, hospitals, stores, etc.
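
Put differently, you provision shared capacity for measured peak-hour
demand, not for the sold rates. A rough cut with the figures above:

    # Capacity needed for real demand vs. for everyone maxing out.
    SUBSCRIBERS = 250_000
    SOLD_MBPS = 1_000       # the 1Gb plan
    PEAK_AVG_MBPS = 10      # average per-user draw at peak hour (see above)

    need = SUBSCRIBERS * PEAK_AVG_MBPS / 1e6    # 2.5 Tb/s actually needed
    sold = SUBSCRIBERS * SOLD_MBPS / 1e6        # 250 Tb/s if all maxed out

    print(f"capacity for real demand: {need:.1f} Tb/s")
    print(f"capacity for sold rates : {sold:.0f} Tb/s ({sold / need:.0f}:1)")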

> And then the pricing comes into it. 1G fiber to the home at <$100/month
> (if it exists at all) isn't going to pay for that sort of idle
> bandwidth.
>
> But the good thing is, you don't actually need that much bandwidth to
> keep your customers happy either.
>
> If the 'guaranteed bandwidth' is written into the contracts properly,
> there is a penalty to the company if they can't provide you the
> bandwidth. That leaves them the possibility of not actually building
> out the O(n^2) network, just keeping ahead of actual requirements, and
> occasionally paying a penalty if they don't.
>
> David Lang
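
To put the quoted penalty idea in numbers, a toy expected-cost
comparison; every figure here is invented purely for illustration:

    # Build-to-worst-case versus build-to-demand plus occasional penalties.
    # All dollar figures and probabilities are hypothetical placeholders.
    FULL_COST = 100.0     # per customer/month, worst-case provisioning
    LEAN_COST = 20.0      # per customer/month, tracking actual demand
    SHORTFALL_P = 0.02    # assumed chance a customer sees congestion/month
    PENALTY = 50.0        # assumed contractual rebate when it happens

    lean_expected = LEAN_COST + SHORTFALL_P * PENALTY
    print(f"worst-case build : {FULL_COST:.2f} per customer/month")
    print(f"lean + penalties : {lean_expected:.2f} per customer/month")
    # Lean wins whenever LEAN_COST + p * PENALTY < FULL_COST.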