General list for discussing Bufferbloat
* [Bloat] datapoint from one vendor regarding bloat
@ 2019-04-11 10:38 Mikael Abrahamsson
  2019-04-11 12:45 ` Sebastian Moeller
  2019-04-11 17:54 ` Jonathan Morton
  0 siblings, 2 replies; 12+ messages in thread
From: Mikael Abrahamsson @ 2019-04-11 10:38 UTC (permalink / raw)
  To: bloat



Hi,

I talked to Nokia (formerly Alcatel-Lucent equipment) regarding their 
typical buffer settings on BNGs. I thought their answer might be relevant 
as a data point for people to have when they do testing:

https://infoproducts.alcatel-lucent.com/cgi-bin/dbaccessfilename.cgi/3HE13300AAAATQZZA01_V1_Advanced%20Configuration%20Guide%20for%207450%20ESS%207750%20SR%20and%207950%20XRS%20for%20Releases%20up%20to%2014.0.R7%20-%20Part%20II.pdf

"mbs and cbs — The mbs defines the MBS for the PIR bucket and the cbs 
defines the CBS for the CIR bucket, both can be configured in bytes or 
kilobytes. Note that the PIR MBS applies to high burst priority packets 
(these are packets whose classification match criteria is configured with 
priority high at the ingress and are in-profile packets at the egress). 
Range: mbs=0 to 4194304 bytes; cbs=0 to 4194304 bytes Note: mbs=0 prevents 
any traffic from being forwarded. Default: mbs=10ms of traffic or 64KB if 
PIR=max; cbs=10ms of traffic or 64KB if CIR=max"

So the default setting is a 10ms buffer: if a packet arrives while the 
buffer already holds 10ms worth of traffic, that packet is dropped 
instead of being enqueued.
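As a rough illustration (my arithmetic, not from the Nokia document), a buffer sized as "10ms of traffic" at a given rate works out to a concrete byte count:

```python
def mbs_bytes(rate_bps: float, depth_s: float = 0.010) -> int:
    """Buffer size in bytes equivalent to `depth_s` seconds of traffic
    at line rate `rate_bps` (bits per second)."""
    return int(rate_bps * depth_s / 8)

# 10 ms worth of traffic at 100 Mbit/s is 125000 bytes
assert mbs_bytes(100e6) == 125_000
```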

They claimed most of their customers (ISPs) just went with this setting 
and didn't change it.

Do we have a way to test this kind of setting from the outside, for 
instance by sending a large chunk of data at wirespeed and then checking 
the characteristics of the buffering/drops for this burst of packets at 
the receiving side?
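One way such a test might be analyzed (a hypothetical sketch, not a tested tool): blast a burst of sequence-numbered packets at wire speed, record send and receive timestamps, and look at how much the one-way delay of delivered packets grows over the uncongested baseline; that growth approximates the buffer depth in time.

```python
def estimate_buffer_depth(send: dict[int, float], recv: dict[int, float]) -> float:
    """Given send/receive timestamps keyed by sequence number (receive side
    only contains packets that survived), return the largest queueing delay
    (seconds) observed, relative to the minimum delay, which is taken as
    the uncongested baseline."""
    delays = [recv[s] - send[s] for s in recv]
    base = min(delays)
    return max(delays) - base
```

Dropped sequence numbers (absent from `recv`) then indicate where the buffer filled.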

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se


* Re: [Bloat] datapoint from one vendor regarding bloat
  2019-04-11 10:38 [Bloat] datapoint from one vendor regarding bloat Mikael Abrahamsson
@ 2019-04-11 12:45 ` Sebastian Moeller
  2019-04-11 17:33   ` Sebastian Moeller
  2019-04-11 17:54 ` Jonathan Morton
  1 sibling, 1 reply; 12+ messages in thread
From: Sebastian Moeller @ 2019-04-11 12:45 UTC (permalink / raw)
  To: bloat, Mikael Abrahamsson


Interesting! Thanks for sharing. I wonder what the netalyzr buffertest would report for these?

Best Regards
        Sebastian


-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



* Re: [Bloat] datapoint from one vendor regarding bloat
  2019-04-11 12:45 ` Sebastian Moeller
@ 2019-04-11 17:33   ` Sebastian Moeller
  0 siblings, 0 replies; 12+ messages in thread
From: Sebastian Moeller @ 2019-04-11 17:33 UTC (permalink / raw)
  To: bloat, Mikael Abrahamsson


I just noticed netalyzr shut down last month....

On April 11, 2019 2:45:19 PM GMT+02:00, Sebastian Moeller <moeller0@gmx.de> wrote:
>Interesting! Thanks for sharing. I wonder what the netalyzr buffertest
>would report for these?
>
>Best Regards
>        Sebastian




* Re: [Bloat] datapoint from one vendor regarding bloat
  2019-04-11 10:38 [Bloat] datapoint from one vendor regarding bloat Mikael Abrahamsson
  2019-04-11 12:45 ` Sebastian Moeller
@ 2019-04-11 17:54 ` Jonathan Morton
  2019-04-11 18:00   ` Holland, Jake
                     ` (2 more replies)
  1 sibling, 3 replies; 12+ messages in thread
From: Jonathan Morton @ 2019-04-11 17:54 UTC (permalink / raw)
  To: Mikael Abrahamsson; +Cc: bloat

> On 11 Apr, 2019, at 1:38 pm, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
> 
> The mbs defines the MBS for the PIR bucket and the cbs defines the CBS for the CIR bucket

What do these lumps of jargon refer to?

 - Jonathan Morton


* Re: [Bloat] datapoint from one vendor regarding bloat
  2019-04-11 17:54 ` Jonathan Morton
@ 2019-04-11 18:00   ` Holland, Jake
  2019-04-11 18:28     ` Jonathan Morton
  2019-04-11 18:02   ` Jan Ceuleers
  2019-04-11 18:27   ` Luca Muscariello
  2 siblings, 1 reply; 12+ messages in thread
From: Holland, Jake @ 2019-04-11 18:00 UTC (permalink / raw)
  To: Jonathan Morton, Mikael Abrahamsson; +Cc: bloat

MBS = maximum burst size
PIR = peak information rate
CBS = committed burst size
CIR = committed information rate

Pages 1185 thru 1222 of the referenced doc* are actually really interesting reading
and an excellent walk-through of their token bucket concept and how to use it.

Best,
Jake

*
https://infoproducts.alcatel-lucent.com/cgi-bin/dbaccessfilename.cgi/3HE13300AAAATQZZA01_V1_Advanced%20Configuration%20Guide%20for%207450%20ESS%207750%20SR%20and%207950%20XRS%20for%20Releases%20up%20to%2014.0.R7%20-%20Part%20II.pdf


On 2019-04-11, 10:54, "Jonathan Morton" <chromatix99@gmail.com> wrote:

    > On 11 Apr, 2019, at 1:38 pm, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
    > 
    > The mbs defines the MBS for the PIR bucket and the cbs defines the CBS for the CIR bucket
    
    What do these lumps of jargon refer to?
    
     - Jonathan Morton



* Re: [Bloat] datapoint from one vendor regarding bloat
  2019-04-11 17:54 ` Jonathan Morton
  2019-04-11 18:00   ` Holland, Jake
@ 2019-04-11 18:02   ` Jan Ceuleers
  2019-04-11 18:27   ` Luca Muscariello
  2 siblings, 0 replies; 12+ messages in thread
From: Jan Ceuleers @ 2019-04-11 18:02 UTC (permalink / raw)
  To: bloat

On 11/04/2019 19:54, Jonathan Morton wrote:
>> On 11 Apr, 2019, at 1:38 pm, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
>>
>> The mbs defines the MBS for the PIR bucket and the cbs defines the CBS for the CIR bucket
> 
> What do these lumps of jargon refer to?

If you're truly interested in the answer: the document Mikael links to
contains the answers.


* Re: [Bloat] datapoint from one vendor regarding bloat
  2019-04-11 17:54 ` Jonathan Morton
  2019-04-11 18:00   ` Holland, Jake
  2019-04-11 18:02   ` Jan Ceuleers
@ 2019-04-11 18:27   ` Luca Muscariello
  2 siblings, 0 replies; 12+ messages in thread
From: Luca Muscariello @ 2019-04-11 18:27 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: Mikael Abrahamsson, bloat


Defs

https://tools.ietf.org/html/rfc2697


On Thu 11 Apr 2019 at 19:54, Jonathan Morton <chromatix99@gmail.com> wrote:

> > On 11 Apr, 2019, at 1:38 pm, Mikael Abrahamsson <swmike@swm.pp.se>
> wrote:
> >
> > The mbs defines the MBS for the PIR bucket and the cbs defines the CBS
> for the CIR bucket
>
> What do these lumps of jargon refer to?
>
>  - Jonathan Morton



* Re: [Bloat] datapoint from one vendor regarding bloat
  2019-04-11 18:00   ` Holland, Jake
@ 2019-04-11 18:28     ` Jonathan Morton
  2019-04-11 23:56       ` Holland, Jake
  2019-04-12  9:47       ` Mikael Abrahamsson
  0 siblings, 2 replies; 12+ messages in thread
From: Jonathan Morton @ 2019-04-11 18:28 UTC (permalink / raw)
  To: Holland, Jake; +Cc: Mikael Abrahamsson, bloat

> On 11 Apr, 2019, at 9:00 pm, Holland, Jake <jholland@akamai.com> wrote:
> 
> MBS = maximum burst size
> PIR = peak information rate
> CBS = committed burst size
> CIR = committed information rate

Ah, this is enough to map the terms onto my prior knowledge of TBFs.  (In my considered opinion, TBFs are obsolete technology for shaping - but there is a lot of deployed hardware still using them.)

So what this boils down to is a two-stage TBF policer.  From idle, such a system will let a burst of traffic through unfiltered, then start dropping once the bucket is empty; the bucket is refilled at some configured rate.  The two-stage system allows implementation of "PowerBoost" style policies.
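As a sketch of that idea (my reading of the two-bucket scheme, not vendor code): a committed bucket (CIR/CBS) and a peak bucket (PIR/MBS), each refilled at its configured rate, with the packet's fate decided by which bucket it fits in.

```python
class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8       # refill rate in bytes per second
        self.burst = burst_bytes       # bucket depth (MBS or CBS)
        self.tokens = burst_bytes      # starts full: initial burst passes
        self.last = 0.0

    def conforms(self, size: int, now: float) -> bool:
        # refill proportionally to elapsed time, capped at the burst depth
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

def police(pkt_size: int, now: float, cir: TokenBucket, pir: TokenBucket) -> str:
    """Two-stage policer sketch: in-profile if within CIR/CBS,
    out-of-profile but forwarded if within PIR/MBS, dropped otherwise."""
    if cir.conforms(pkt_size, now):
        return "in-profile"
    if pir.conforms(pkt_size, now):
        return "out-of-profile"
    return "drop"
```

(The real two-rate marker semantics, e.g. RFC 2698, differ in which buckets a conforming packet debits; this is only meant to show the burst-then-drop behavior from idle.)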

The practical effect is that if there's a 10ms burst permitted, there may be 10ms of traffic collecting in some downstream dumb FIFO.  This depends on fine details of the network topology, but this is the main reason I implemented a deficit-mode "virtual clock" shaper in Cake, which has no initial burst.  With that said, 10ms isn't too bad in itself.

A question I would ask, though, is whether that 10ms automatically scales to the actual link rate, or whether it is pre-calculated for the fastest rate and then actually turns into a larger time value when the link rate drops.  That's a common fault with sizing FIFOs, too.

> Pages 1185 thru 1222 of the referenced doc* are actually really interesting reading
> and an excellent walk-through of their token bucket concept and how to use it.

Nearly 40 pages?  I have work to do, y'know!

I did just glance through it, and it looks like exactly the sort of arcane system which ISPs would *want* to leave well alone in its default configuration, or make only the simplest and easiest-to-understand changes to.  There's obviously a lot of support for Diffserv designed into it, but nobody really knows how to configure a given Diffserv implementation to work well in the general case, simply because Diffserv itself is under-specified.

 - Jonathan Morton



* Re: [Bloat] datapoint from one vendor regarding bloat
  2019-04-11 18:28     ` Jonathan Morton
@ 2019-04-11 23:56       ` Holland, Jake
  2019-04-12  0:37         ` Jonathan Morton
  2019-04-12  9:47       ` Mikael Abrahamsson
  1 sibling, 1 reply; 12+ messages in thread
From: Holland, Jake @ 2019-04-11 23:56 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: Mikael Abrahamsson, bloat

On 2019-04-11, 11:29, "Jonathan Morton" <chromatix99@gmail.com> wrote:
> A question I would ask, though, is whether that 10ms automatically scales to the actual link rate, or whether it is pre-calculated for the fastest rate and then actually turns into a larger time value when the link rate drops.  That's a common fault with sizing FIFOs, too.

That's an interesting question and maybe a useful experiment, if somebody's
got one of these boxes.

But in practice do you expect link speed changes to be a major issue?  Would
this just be an extra point that should also be tuned when the max link speed
is changed dramatically by config and the policer is turned on (so there's,
say, a 5% increased chance of misconfiguration, and a reasonably diagnosable
problem that a knowledgebase post somewhere would end up covering once
somebody has dug into it)?  Or is there a deeper problem if it's
pre-calculated for the fastest rate?

-Jake




* Re: [Bloat] datapoint from one vendor regarding bloat
  2019-04-11 23:56       ` Holland, Jake
@ 2019-04-12  0:37         ` Jonathan Morton
  2019-04-12  0:45           ` Holland, Jake
  0 siblings, 1 reply; 12+ messages in thread
From: Jonathan Morton @ 2019-04-12  0:37 UTC (permalink / raw)
  To: Holland, Jake; +Cc: Mikael Abrahamsson, bloat

> On 12 Apr, 2019, at 2:56 am, Holland, Jake <jholland@akamai.com> wrote:
> 
> But in practice do you expect link speed changes to be a major issue?

For wireline, consider ADSL2+.  Maximum downstream link speed is 24Mbps, impaired somewhat by ATM framing but let's ignore that for now.  A basic "poverty package" might limit to 4Mbps; already a 6:1 ratio.  In rural areas the "last mile" copper may be so long and noisy for certain individual subscribers that only 128Kbps is available; this is now a 192:1 ratio, turning 10ms into almost 2 seconds if uncompensated from the headline 24Mbps rate.  Mind you, 10ms is too short to get even a single packet through at 128Kbps, so you'd need to put in a failsafe.
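Those ratios check out (my arithmetic, assuming the byte count is fixed at provisioning time for the headline rate):

```python
def scaled_delay_ms(base_ms: float, sized_for_bps: float, actual_bps: float) -> float:
    """If a buffer holds `base_ms` of traffic at `sized_for_bps` but the link
    actually runs at `actual_bps`, the same bytes represent this much delay."""
    return base_ms * sized_for_bps / actual_bps

assert scaled_delay_ms(10, 24e6, 4e6) == 60.0       # "poverty package", 6:1
assert scaled_delay_ms(10, 24e6, 128e3) == 1875.0   # 192:1 -> almost 2 seconds
```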

That's on wireline, where link speed changes are relatively infrequent and usually minor, so it's easy to signal changes back to some discrete policer box (usually called a BRAS in an ADSL context).  That may be what you have in mind.

One could, and should, also consider wireless technologies.  A given handset on a given SIM card may expect 100Mbps LTE under ideal conditions, in a major city during the small hours, but might only have a dodgy EDGE signal on a desolate hilltop a few hours later.  (Here in Finland, cell coverage was greatly improved in rural areas by cascading old 2G equipment from urban areas that received 3G upgrades, so this is not at all uncommon.)  In general, wireless links change rate rapidly and unpredictably in reaction to propagation conditions as the handset moves (or, for fixed stations, as the weather changes), and the ratio of possible link rates is even more severe than the ADSL example above.

Often a "poverty package" is implemented through a shaper rather than a policer, backed by a dumb FIFO on which no right-sizing has been considered (even though the link rate is obviously known in advance).  On one of these, I have personally experienced 40+ seconds of delay, rendering the connection useless for other purposes while any sort of sustained download was in progress.  In fact, that's one of the incidents which got me seriously interested in congestion control; at the time, I hacked the Linux TCP stack to right-size the receive window, and directed most of my traffic through a proxy running on that machine.  This was sufficient to restore some usability.

I find it notable that ISPs mostly consider only policers for congestion signalling, and rarely deploy even these to all the places where congestion may reasonably occur.

 - Jonathan Morton



* Re: [Bloat] datapoint from one vendor regarding bloat
  2019-04-12  0:37         ` Jonathan Morton
@ 2019-04-12  0:45           ` Holland, Jake
  0 siblings, 0 replies; 12+ messages in thread
From: Holland, Jake @ 2019-04-12  0:45 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: Mikael Abrahamsson, bloat

Ah, I see what you mean.  Yes, this makes sense as a major concern worth checking,
thanks for explaining.

-Jake




* Re: [Bloat] datapoint from one vendor regarding bloat
  2019-04-11 18:28     ` Jonathan Morton
  2019-04-11 23:56       ` Holland, Jake
@ 2019-04-12  9:47       ` Mikael Abrahamsson
  1 sibling, 0 replies; 12+ messages in thread
From: Mikael Abrahamsson @ 2019-04-12  9:47 UTC (permalink / raw)
  To: Jonathan Morton; +Cc: Holland, Jake, bloat

On Thu, 11 Apr 2019, Jonathan Morton wrote:

> So what this boils down to is a two-stage TBF policer.  From idle, such 
> a system will let a burst of traffic through unfiltered, then start 
> dropping once the bucket is empty; the bucket is refilled at some 
> configured rate.  The two-stage system allows implementation of 
> "PowerBoost" style policies.

I'm not so sure:

"Buffering (Enqueuing) Once a packet is assigned to a certain forwarding 
class, it will try to get a buffer in order to be enqueued. Whether the 
packet can get a buffer is determined by the instantaneous buffer 
utilization and several attributes of the queue (such as Maximum Burst 
Size (MBS), Committed Burst Size (CBS) and high-prio-only) that will be 
discussed in more detail later in this chapter. If a packet cannot get a 
buffer for whatever reason, the packet will get dropped immediately. "

The person I talked to yesterday insisted that they actually did 10ms of 
*buffering* *bidirectionally*, because we specifically discussed policing 
and buffering and the difference.
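If it really is buffering with tail-drop at MBS, my reading of the quoted admission rule looks roughly like this (a simplified sketch; the exact high-prio-only semantics are in the referenced guide):

```python
def can_enqueue(queue_bytes: int, pkt_bytes: int, mbs: int,
                high_prio: bool, high_prio_only_bytes: int) -> bool:
    """Admission check sketch: low-priority packets may only fill the queue
    up to MBS minus the high-prio-only reserved region; high-priority
    packets may use the full MBS.  A packet that cannot get a buffer is
    dropped immediately (tail drop)."""
    limit = mbs if high_prio else mbs - high_prio_only_bytes
    return queue_bytes + pkt_bytes <= limit
```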

I have access to one of their devices in our lab, I'm going to do testing 
of this in the next few weeks so I'll know for sure by then.

-- 
Mikael Abrahamsson    email: swmike@swm.pp.se

