[Bloat] UnderBloat on fiber and wisps

Dave Taht dave.taht at gmail.com
Mon Mar 13 12:04:01 EDT 2023


On Mon, Mar 13, 2023 at 8:08 AM Jeremy Austin via Rpm
<rpm at lists.bufferbloat.net> wrote:
>
>
>
> On Mon, Mar 13, 2023 at 3:02 AM Sebastian Moeller via Starlink <starlink at lists.bufferbloat.net> wrote:
>>
>> Hi Dan,
>>
>>
>> > On Jan 9, 2023, at 20:56, dan via Rpm <rpm at lists.bufferbloat.net> wrote:
>> >
>> >  You don't need to generate the traffic on a link to measure how
>> > much traffic a link can handle.
>>
>>         [SM] OK, I will bite, how do you measure achievable throughput without actually generating it? Packet-pair techniques are notoriously imprecise and have funny failure modes.
>
>
> I am also looking forward to the full answer to this question. While one can infer when a link is saturated by mapping network topology onto latency sampling, it can have on the order of 30% error, given that there are multiple causes of increased latency beyond proximal congestion.
>
> A question I commonly ask network engineers or academics is "How can I accurately distinguish a constraint in supply from a reduction in demand?"

This is an insanely good point. In looking over the WISP
configurations I have collected to date, many are using SFQ, which has
a default limit of 128 packets. Some are using SFQ with an *even
shorter* packet limit, which looks good on speedtests that open many
flows (McKeown's BDP/sqrt(flows) buffer-sizing rule), but is *lousy*
at letting a single flow achieve the full rate (the more common case
for end-user QoE).
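
To put rough numbers on that (a back-of-the-envelope sketch; the 25ms
RTT and 1500-byte MTU are my assumptions, not measurements):

# Back-of-the-envelope: how many packets of queue does a link need?
# Assumptions (mine, not measured): 25 ms RTT, 1500-byte packets.
RTT_S = 0.025
PKT_BYTES = 1500

def bdp_packets(rate_bps, rtt_s=RTT_S):
    """Bandwidth-delay product expressed in full-size packets."""
    return rate_bps * rtt_s / 8 / PKT_BYTES

def mckeown_packets(rate_bps, flows, rtt_s=RTT_S):
    """Appenzeller/McKeown rule of thumb: BDP / sqrt(number of flows)."""
    return bdp_packets(rate_bps, rtt_s) / flows ** 0.5

rate = 200e6  # 200 Mbit/s
print(f"single flow needs ~{bdp_packets(rate):.0f} packets")      # ~417
print(f"8-flow speedtest needs ~{mckeown_packets(rate, 8):.0f}")  # ~147
# SFQ's 128-packet default (nearly) covers the multi-flow speedtest,
# but a lone flow gets less than a third of the queue it needs.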

I have in general tried to get the MikroTik folk, at least, to switch
away from FIFOs, RED, and SFQ towards fq_codel or cake at the defaults
(good to 10Gbit), in part due to this.

I think SFQ at 128 packets really starts tapping out on most networks
at around the 200Mbit level, and above 400Mbit really, really does not
have enough queue. The net result is that WISPs attempting to provide
higher tiers of service are not actually delivering them in the real
world: an accidental constraint in supply.
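
Another way to see the same thing: convert SFQ's 128-packet cap into
milliseconds of buffering at each service tier (again a sketch with an
assumed 1500-byte MTU):

# How long does a full 128-packet SFQ queue last at each rate?
PKT_BITS = 1500 * 8
LIMIT = 128

for mbit in (100, 200, 400, 1000):
    drain_ms = LIMIT * PKT_BITS / (mbit * 1e6) * 1e3
    print(f"{mbit:5} Mbit/s -> {drain_ms:5.2f} ms of queue")
# 100 Mbit/s -> 15.36 ms, 200 -> 7.68 ms, 400 -> 3.84 ms, 1000 -> 1.54 ms
# Against a typical 25-80 ms internet RTT, a few ms of queue cannot
# absorb a single TCP flow's sawtooth, so the flow never fills the link.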

I have a blog piece, long in draft, called "underbloat", talking to
this. I have also now seen multiple fiber installs that had a
reasonable 50ms FIFO buffer at 100Mbit, but when upgraded to 1Gbit
left the buffer size unchanged, shrinking it to 5ms, which has bad
side effects for all traffic.
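
That upgrade failure is just arithmetic: a FIFO sized in packets stays
the same size while the line rate rises 10x (a sketch, assuming the
common 1500-byte MTU):

# A FIFO sized for 50 ms at 100 Mbit/s, left alone after a 1 Gbit upgrade.
PKT_BITS = 1500 * 8

fifo_pkts = round(100e6 * 0.050 / PKT_BITS)  # ~417 packets = 50 ms @ 100 Mbit
for rate in (100e6, 1e9):
    ms = fifo_pkts * PKT_BITS / rate * 1e3
    print(f"{rate/1e6:5.0f} Mbit/s -> {ms:4.1f} ms FIFO")
# 100 Mbit/s -> 50.0 ms; 1000 Mbit/s -> 5.0 ms: the same packet count
# that was a sane buffer at 100 Mbit is badly underbuffered at a gigabit.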

It also looks to me like at least some ubnt radios are FQ'd and underbuffered.

> --
> --
> Jeremy Austin
> Sr. Product Manager
> Preseem | Aterlo Networks
> preseem.com
>
> Book a Call: https://app.hubspot.com/meetings/jeremy548
> Phone: 1-833-733-7336 x718
> Email: jeremy at preseem.com
>
> Stay Connected with Newsletters & More: https://preseem.com/stay-connected/
> _______________________________________________
> Rpm mailing list
> Rpm at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/rpm



-- 
Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
Dave Täht CEO, TekLibre, LLC

