General list for discussing Bufferbloat
From: Mikael Abrahamsson <swmike@swm.pp.se>
To: Jonathan Morton <chromatix99@gmail.com>
Cc: cerowrt-devel@lists.bufferbloat.net, bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] [Cerowrt-devel] DC behaviors today
Date: Thu, 14 Dec 2017 09:22:20 +0100 (CET)
Message-ID: <alpine.DEB.2.20.1712140903240.8884@uplift.swm.pp.se>
In-Reply-To: <CAJq5cE21LU9FiF1Nr=VPoZVUw1QAezzSxX-YwuoUnrGN6VEkKw@mail.gmail.com>

On Wed, 13 Dec 2017, Jonathan Morton wrote:

> Ten times average demand estimated at time of deployment, and struggling 
> badly with peak demand a decade later, yes.  And this is the 
> transportation industry, where a decade is a *short* time - like less 
> than a year in telecoms.

I've worked at ISPs since 1999 or so, both at startups and at established 
ISPs.

Traffic growth tends to follow an S curve: while you're adding customers 
you can easily see 100%-300% growth per year (or more). Then, once the 
market becomes saturated, growth comes from increased per-customer usage, 
which for the past 20 years or so has been in the neighbourhood of 20-30% 
per year.
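As a back-of-the-envelope illustration of what the saturated phase means 
(the numbers are hypothetical, picked from the middle of the range above):

```python
# Compounded per-customer growth in the saturated phase of the S curve.
# A steady 25%/year increase (hypothetical midpoint of the 20-30% range)
# roughly doubles traffic every ~3.1 years.
growth = 1.25
traffic = 1.0
for year in range(1, 11):
    traffic *= growth
    print(f"year {year:2d}: {traffic:.2f}x baseline")
```

Even the "slow" end of the curve compounds to roughly 9x traffic over a 
decade, which is why capacity planning never really stops.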

When you run a network that congests for parts of the day, it's hard to 
tell what "Quality of Experience" your customers will have. I've heard 
horror stories from the '90s, when a then-large US ISP was running an OC3 
(155 megabit/s) full most of the day. So someone said "oh, we need to 
upgrade this", and after a while they did, to 2xOC3. Great, right? No: 
after that upgrade both OC3s were completely congested. OK, so they 
upgraded to OC12 (622 megabit/s). After that upgrade the link was finally 
uncongested for a few hours of the day, and of course it needed further 
upgrades.

So at the places I've been, I've advocated for planning rules that say: 
when the link peaks at 5-minute averages of more than 50% of link 
capacity, an upgrade needs to be ordered. The 50% number can be larger if 
the link aggregates a larger number of customers, because your 
"statistical overbooking" typically varies less the more customers 
participate.
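The planning rule above can be sketched roughly like this. The threshold 
tiers and customer counts are made-up illustrations of "can be larger 
with more customers"; a real ISP would tune these from its own 
measurements:

```python
# Sketch of the upgrade-trigger rule: order an upgrade once the peak
# 5-minute average crosses a utilization threshold.  Thresholds and
# customer-count tiers below are hypothetical, not operational values.
def upgrade_threshold(num_customers: int) -> float:
    """More aggregated customers -> smoother traffic -> higher safe threshold."""
    if num_customers < 1_000:
        return 0.50
    if num_customers < 10_000:
        return 0.60
    return 0.70

def needs_upgrade(five_min_averages_bps: list[float],
                  capacity_bps: float, num_customers: int) -> bool:
    peak = max(five_min_averages_bps)          # worst 5-minute average seen
    return peak / capacity_bps > upgrade_threshold(num_customers)
```

For example, a 5.6 Gbit/s peak on a 10G link serving 500 customers 
(56% > 50%) triggers an order, while the same peak on a link aggregating 
20,000 customers (56% < 70%) does not.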

These core devices do not do per-flow anything. They might have 10G or 
100G links to/from them carrying many millions of flows, and it's all NPU 
forwarding. Typically they might do DiffServ-based queueing and WRED to 
mitigate excessive buffering. Today, they typically don't even do ECN 
marking (which I have advocated for, but there is not much support from 
other ISPs in this mission).
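For readers unfamiliar with WRED: it drops (or, with ECN enabled, could 
mark) packets with a probability that ramps up between two queue-depth 
thresholds. A simplified sketch (real implementations work on an 
exponentially weighted moving average of queue depth, omitted here):

```python
# Simplified WRED drop-probability curve.  Per-packet cost is a couple of
# comparisons and one multiply, which is why it fits in an NPU.
def wred_drop_prob(avg_queue: float, min_th: float, max_th: float,
                   max_p: float = 0.1) -> float:
    if avg_queue < min_th:
        return 0.0            # below min threshold: never drop
    if avg_queue >= max_th:
        return 1.0            # at/above max threshold: drop everything
    # linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

With DiffServ, each drop-precedence class simply gets its own 
(min_th, max_th, max_p) triple; there is still no per-flow state anywhere.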

Now, on the customer access line it's a completely different matter. 
Typically people build with a BRAS or similar, where (tens of) thousands 
of customers might sit on a (very expensive) access card with hundreds of 
thousands of queues per NPU. That still leaves just a few queues per 
customer, unfortunately, so these do not do per-flow anything either. 
This is where PIE comes in: devices like these can do PIE in the NPU 
fairly easily, because it's kind of like WRED.
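The "kind of like WRED" point can be seen in PIE's control law: it 
periodically adjusts a single drop probability, with no per-flow state. 
A simplified sketch after RFC 8033 (the gains and target below are 
roughly the RFC's defaults; the RFC also auto-scales the gains by the 
current drop probability, which is omitted here):

```python
# Simplified PIE drop-probability update (after RFC 8033).  Runs once per
# update interval; the per-packet path is then just a WRED-like random
# drop against drop_prob -- cheap enough for a BRAS NPU.
ALPHA = 0.125    # gain on deviation from the target delay
BETA = 1.25      # gain on the trend (is delay rising or falling?)
TARGET = 0.015   # target queueing delay in seconds (15 ms)

def pie_update(drop_prob: float, qdelay: float, qdelay_old: float) -> float:
    p = drop_prob + ALPHA * (qdelay - TARGET) + BETA * (qdelay - qdelay_old)
    return min(max(p, 0.0), 1.0)   # clamp to a valid probability
```

Note that the controller works on queueing *delay*, not queue length, 
which is what makes it a bufferbloat remedy rather than just a tail-drop 
replacement.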

So back to the capacity issue. Since these devices typically aren't good 
at assuring per-customer access to the shared medium (backbone links), 
it's easier to just make sure the backbone links are not regularly full. 
This doesn't mean you have 10x capacity all the time; it probably means 
you're bouncing between 25-70% utilization of your links (in the normal 
case, because you need spare capacity to handle events that temporarily 
increase traffic, plus to handle loss of capacity in case of a link 
fault). The upgrade might be to add another link, or to move to a higher 
speed tier of interface, bringing utilization down to typically half or a 
quarter of what you had before.
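The "half or quarter" arithmetic, with hypothetical link speeds:

```python
# Same peak traffic, spread over more or faster capacity.  The 10G/40G
# speeds are illustrative examples, not a recommendation.
def utilization(peak_bps: float, capacity_bps: float) -> float:
    return peak_bps / capacity_bps

peak = 7e9                              # peak 5-minute average, 7 Gbit/s
print(utilization(peak, 10e9))          # one 10G link: 70%, time to upgrade
print(utilization(peak, 2 * 10e9))      # add a parallel 10G link: down to half
print(utilization(peak, 40e9))          # or jump to a 40G interface: a quarter
```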

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se
