"How can I accurately distinguish a constraint in supply from a reduction in demand?"

Serious answer: you look at the increase or decrease of customer tickets and complaints.

Practical example: a few years ago, we had two fixed wireless access networks, about 3 km apart, each fed by its own fiber local loop to our DC. I had the (in hindsight terrible) idea of backing each network's fiber with the other's, using two wireless point-to-point hops through a tower in the middle, which also served customers of its own.

When one fiber went down, the traffic from the tower in the middle still had to travel to its own fiber POP (as the router doing the routing was there), then trombone back over the same wireless link, across the middle tower again, and onward to the functioning fiber. The net effect was a serious constraint on supply.
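The arithmetic of that trombone is worth spelling out. A rough sketch (all numbers invented for illustration): every byte from the middle tower crosses the same wireless hop twice, on top of that hop carrying the failed network's backhaul.

```python
# Sketch of the failover trombone (all numbers are hypothetical).
# Traffic from the middle tower travels to the dead fiber's POP and
# back over the SAME wireless hop, so it consumes airtime twice.

hop_capacity_mbps = 300          # point-to-point wireless hop
tower_demand_mbps = 80           # customers on the middle tower
rerouted_demand_mbps = 150       # traffic of the failed fiber network

# Normal operation: tower traffic crosses the hop once.
normal_load = tower_demand_mbps

# Failover: tower traffic crosses twice (out and back), plus the
# rerouted network's traffic.
failover_load = 2 * tower_demand_mbps + rerouted_demand_mbps

print(f"normal:   {normal_load / hop_capacity_mbps:.0%} of hop")
print(f"failover: {failover_load / hop_capacity_mbps:.0%} of hop")
```

With these made-up numbers the hop goes from comfortably loaded to oversubscribed the moment the fiber fails, which matches the "serious constraint on supply" observed.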

The result was a deluge of complaints, which led us to abandon the approach and taught me two lessons. The first: your customers vote with their wallets, they will tell you when you are doing a bad job, and you had better listen.

The second lesson is that customers prefer no service at all to a severely degraded service. It's better to know "I can read a book for two hours while they fix the fiber" than to suffer a randomized, poor quality of service.

Footnote: we don't experiment on customers, so we never ran experiments to test tolerance to degraded service, or where the line between "OK(ish)" and "unusable" lies in terms of the demand-to-supply ratio. Effectively: if we contend 4:1 and suddenly fail over to a link contended 40:1, is that acceptable? Is 100:1? It would be great to hear others' experiences and ideas.
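One way to frame the question is in worst-case per-subscriber terms. A rough sketch (plan rate and link capacity are invented, and real networks depend on how many subscribers are simultaneously active, which this ignores): per-subscriber share falls linearly with the contention ratio, so 40:1 leaves a tenth of what 4:1 did.

```python
# Rough worst-case arithmetic for contention ratios.
# plan_mbps and link_mbps are hypothetical illustration values.

plan_mbps = 100          # advertised per-subscriber rate
link_mbps = 1000         # shared link capacity

def worst_case_share(contention_ratio):
    """Per-subscriber rate if every subscriber transmits at once."""
    subscribers = contention_ratio * link_mbps / plan_mbps
    return link_mbps / subscribers

for ratio in (4, 40, 100):
    print(f"{ratio}:1 -> {worst_case_share(ratio):.1f} Mbit/s each")
```

The sketch says nothing about where "OK(ish)" ends, of course; it only shows that the failover scenario above is a 10x step down, not a marginal one.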

Best,

Mike
On Mar 13, 2023 at 17:09 +0100, Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net>, wrote:


On Mar 13, 2023, at 17:04, Dave Taht <dave.taht@gmail.com> wrote:

On Mon, Mar 13, 2023 at 8:08 AM Jeremy Austin via Rpm
<rpm@lists.bufferbloat.net> wrote:



On Mon, Mar 13, 2023 at 3:02 AM Sebastian Moeller via Starlink <starlink@lists.bufferbloat.net> wrote:

Hi Dan,


On Jan 9, 2023, at 20:56, dan via Rpm <rpm@lists.bufferbloat.net> wrote:

You don't need to generate the traffic on a link to measure how
much traffic a link can handle.

[SM] OK, I will bite, how do you measure achievable throughput without actually generating it? Packet-pair techniques are notoriously imprecise and have funny failure modes.
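For readers unfamiliar with the technique being dismissed here: packet-pair probing sends two back-to-back packets and infers the bottleneck rate from how far apart they arrive. A toy sketch (idealized: no cross traffic, no interrupt coalescing, which is exactly where the real-world imprecision comes from):

```python
# Toy packet-pair capacity estimate. Idealized: assumes the pair's
# dispersion is set solely by the bottleneck link, with no cross
# traffic or batching -- the failure modes alluded to above.

def packet_pair_estimate(packet_bytes, dispersion_s):
    """Bottleneck capacity in bit/s from inter-arrival dispersion."""
    return packet_bytes * 8 / dispersion_s

# Two 1500-byte packets arriving 120 microseconds apart suggest:
rate = packet_pair_estimate(1500, 120e-6)
print(f"~{rate / 1e6:.0f} Mbit/s")
```

In practice cross traffic can squeeze or stretch the dispersion either way, which is why single-pair estimates are noisy without heavy filtering.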


I am also looking forward to the full answer to this question. While one can infer when a link is saturated by mapping network topology onto latency sampling, it can have on the order of 30% error, given that there are multiple causes of increased latency beyond proximal congestion.

A question I commonly ask network engineers or academics is "How can I accurately distinguish a constraint in supply from a reduction in demand?"

This is an insanely good point. Looking over the WISP configurations I have to date, many are using SFQ, which has a default packet limit of 128 packets. Many are using SFQ with an *even shorter* packet limit, which looks good on speedtests that open many flows (McKeown's BDP/sqrt(flows) sizing rule), but is *lousy* for allowing a single flow to achieve full rate (the more common case for end-user QoE).

I have in general tried to get MikroTik folk, at least, to switch away from FIFOs, RED, and SFQ towards fq_codel or cake at the defaults (good to 10 Gbit), in part due to this.

I think SFQ 128 really starts tapping out on most networks at around the 200 Mbit level, and above 400 really, really does not have enough queue, so the net result is that WISPs attempting to provide higher levels of service are not actually providing it in the real world: an accidental constraint in supply.
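To put rough numbers on "not enough queue" (the RTT and packet size below are assumptions, not measurements): a single flow needs on the order of a bandwidth-delay product of buffering to sustain full rate, and at 200-400 Mbit that is already several times SFQ's 128-packet limit.

```python
# BDP in packets: roughly the queue a single flow needs to stay at
# full rate. RTT and packet size are assumed values for illustration.

rtt_s = 0.040            # 40 ms round-trip time, an assumption
packet_bytes = 1500      # full-size Ethernet payload

def bdp_packets(rate_mbps):
    bdp_bytes = rate_mbps * 1e6 * rtt_s / 8
    return bdp_bytes / packet_bytes

for rate in (100, 200, 400):
    print(f"{rate} Mbit/s -> BDP ~{bdp_packets(rate):.0f} packets "
          f"(SFQ default limit: 128)")
```

At a 40 ms RTT even 100 Mbit already wants well over 128 packets of queue for one flow, consistent with SFQ tapping out around the 200 Mbit mark.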

I have a blog piece, long in draft, called "underbloat", speaking to this. Also, I have now seen multiple fiber installs that had a reasonable 50 ms FIFO buffer at 100 Mbit, but when upgraded to 1 Gbit left it at 5 ms, which has bad side effects for all traffic.

It also looks to me like at least some ubnt radios are FQ'd and underbuffered.

This is why I tend to describe bufferbloat as a problem of over-sized and under-managed buffers, hoping to imply that reducing the buffer size is not the only, or even the best, remedy here. Once properly managed, large buffers do no harm (except wasting memory most of the time, but since that buys some resilience, that is not so bad).

Regards
Sebastian

P.S.: This is a bit of a pendulum thing, where one simplistic "solution" (too-large buffers) gets replaced with another simplistic solution (too-small buffers) ;)




--
Jeremy Austin
Sr. Product Manager
Preseem | Aterlo Networks
preseem.com

Book a Call: https://app.hubspot.com/meetings/jeremy548
Phone: 1-833-733-7336 x718
Email: jeremy@preseem.com

Stay Connected with Newsletters & More: https://preseem.com/stay-connected/
_______________________________________________
Rpm mailing list
Rpm@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/rpm



--
Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
Dave Täht CEO, TekLibre, LLC

_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink