I just noticed Netalyzr shut down last month....

On April 11, 2019 2:45:19 PM GMT+02:00, Sebastian Moeller <moeller0@gmx.de> wrote:
> Interesting! Thanks for sharing. I wonder what the Netalyzr buffer test
> would report for these?
>
> Best Regards
>         Sebastian
>
> On April 11, 2019 12:38:54 PM GMT+02:00, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
<pre class="k9mail"><br>Hi,<br><br>I talked to Nokia (former Alcatel/Lucent equipment) regarding their <br>typical buffer settings on BNG. I thought their answer might be relevant <br>as a data point for people to have when they do testing:<br><br><a href="https://infoproducts.alcatel-lucent.com/cgi-bin/dbaccessfilename.cgi/3HE13300AAAATQZZA01_V1_Advanced%20Configuration%20Guide%20for%207450%20ESS%207750%20SR%20and%207950%20XRS%20for%20Releases%20up%20to%2014.0.R7%20-%20Part%20II.pdf">https://infoproducts.alcatel-lucent.com/cgi-bin/dbaccessfilename.cgi/3HE13300AAAATQZZA01_V1_Advanced%20Configuration%20Guide%20for%207450%20ESS%207750%20SR%20and%207950%20XRS%20for%20Releases%20up%20to%2014.0.R7%20-%20Part%20II.pdf</a><br><br>"mbs and cbs — The mbs defines the MBS for the PIR bucket and the cbs <br>defines the CBS for the CIR bucket, both can be configured in bytes or <br>kilobytes. Note that the PIR MBS applies to high burst priority packets <br>(these are packets whose classification match criteria is configured with <br>priority high at the ingress and are in-profile packets at the egress). <br>Range: mbs=0 to 4194304 bytes; cbs=0 to 4194304 bytes Note: mbs=0 prevents <br>any traffic from being forwarded. Default: mbs=10ms of traffic or 64KB if <br>PIR=max; cbs=10ms of traffic or 64KB if CIR=max"<br><br>So the default setting is that they have a 10ms buffer and if a packet is <br>trying to be inserted into this buffer and it's 10ms full, then that <br>packet will instead be dropped.<br><br>They claimed most of their customers (ISPs) just went with this setting <br>and didn't change it.<br><br>Do we have a way to test this kind of setting from the outside, for <br>instance by sending a large chunk of data at wirespeed and then checking <br>the characteristics of the buffering/drop for this burst of packets at <br>receive side?<br></pre></blockquote></div></blockquote></div><br>-- <br>Sent from my Android device with K-9 Mail. Please excuse my brevity.</body></html>