[Bloat] [Starlink] [Rpm] [LibreQoS] [EXTERNAL] Re: Researchers Seeking Probe Volunteers in USA
Sebastian Moeller
moeller0 at gmx.de
Tue Mar 14 05:11:53 EDT 2023
Hi Dan,
> On Mar 13, 2023, at 22:27, dan <dandenson at gmail.com> wrote:
>
> I’m sticking to my guns on this, but am prepared to let this particular argument rest. The thread is approaching unreadable.
[SM] Sorry, I have a tendency to simultaneously push multiple discussion threads instead of focussing on the most important one...
>
> Let me throw something else out there. It would be very nice to have some standard packet type that was designed to be mangled by a traffic shaper. So you could initiate a speed test specifically to stress-test the link and then exchange a packet that the shaper would update both ways with all the stats you might want. I.e., the speed test is getting 80Mbps but there’s an additional 20Mbps on-link, so it should report to the user the 100M aggregate with the details broken out usably.
[SM] Yeah, that does not really work; the traffic shaper does not know that elusive capacity either... On some link technologies like ethernet something reportable might exist, but on variable-rate links not so much. However, it would be nice if traffic shapers could be tickled to reveal their own primary configuration. As I answered Dave in a different post, we are talking about 4 numbers per shaper instance here:
gross shaper rate, per-packet-overhead, mpu (minimum packet unit), link-layer specific accounting (e.g. 48/53 encapsulation for ATM/AAL5)
The first two are clearly shaper-specific and I expect all competent shapers to use these; mpu is more a protocol issue (e.g. all link layers sending partial ethernet frames with frame check sequence inherit ethernet's minimal packet size of 64 bytes plus overhead).
For my own shaper I know these already, but my ISP's shapers are pretty opaque to me, so being able to query these would be great. (BTW, for speedtests in disputes with my ISP, I obviously disable my traffic shaper, so its capacity loss cannot be attributed to them.)
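To make these four numbers a bit more concrete, here is a rough Python sketch of what they mean for a single packet (the overhead and mpu values below are just placeholders, not anybody's real configuration); this is roughly the model behind cake's overhead/mpu/atm keywords:

import math

def wire_size(ip_packet_bytes, overhead=34, mpu=64, atm=False):
    # Bytes this packet actually occupies on the link: add the per-packet
    # overhead, enforce the minimum packet unit, then (optionally) round up
    # to whole 53-byte ATM cells carrying 48 bytes of payload each.
    size = max(ip_packet_bytes + overhead, mpu)
    if atm:
        size = math.ceil(size / 48) * 53
    return size

# E.g. a 40-byte TCP ACK: 74 bytes with the assumed 34 bytes of overhead,
# but 106 bytes (two full cells) on an ATM/AAL5 link.
print(wire_size(40), wire_size(40, atm=True))   # -> 74 106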
> Could also report to that speed test client and server things like latency over the last x minutes along
[SM] A normal shaper does not know this... and even cake/fq_codel, which measure sojourn time per packet and have a decent idea about a packet's flow identity (not perfect, as there is a limited number of hash buckets), do not report anything useful in regards to "average" sojourn time for the packets in the measurement flows... (they would need to know when to start and when to stop, at the very least). Honestly this looks more like a post-hoc job to be performed on packet captures than an on-line thing expected from a traffic shaper/AQM.
> with throughput so again, could be charted out to show the ‘good put’ and similar numbers.
[SM] Sorry to sound contrarian, but goodput is IMHO a number quite relevant to end-users, so speed tests reporting an estimate of that number is A-OK with me; but I also understand that a speedtest cannot report the veridical gross bottleneck capacity in all cases anyway, due to lack of important information.
> Basically, provide the end user with decently accurate data that includes what the speed test app wasn’t able to see itself.
[SM] Red herring IMHO; speedtests have clear issues and problems, but the fact that they do not measure 100% of the packets traversing a link is not one of them. They mostly come close enough to the theoretical numbers that the differences can simply be ignored... As I said, my ISP provisions a gross DSL sync of 116.7 Mbps but contractually only asserts a maximum of 100 Mbps goodput (over IPv6); the math for this works out well, and my ISP actually tends to over-fulfil the contractual rate in that I get ~105 Mbps of the 100 my contract promises...
Sure, personally I am more interested in the actual gross rate my ISP sets its traffic shaper to for my link, but generally hitting the contract numbers is not rocket science if one is careful about which rate to promise... ;)
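Just to show that this math is unremarkable, a back-of-the-envelope Python sketch; the framing numbers are assumptions for a generic PPPoE VDSL2 link, not my ISP's verified values:

GROSS_SYNC_MBPS = 116.7      # gross DSL sync rate
PTM_EFFICIENCY = 64 / 65     # VDSL2 PTM 64/65 encoding (assumed)
MTU = 1492                   # PPPoE-reduced IP MTU (assumed)
PER_PACKET_OVERHEAD = 30     # assumed: PPPoE 8 + ethernet 14 + VLAN 4 + FCS 4
TCP_IPV6_HEADERS = 72        # IPv6 40 + TCP 20 + timestamp option 12

payload_per_packet = MTU - TCP_IPV6_HEADERS   # TCP payload in a full-size packet
wire_per_packet = MTU + PER_PACKET_OVERHEAD   # what that packet costs on the link
goodput = GROSS_SYNC_MBPS * PTM_EFFICIENCY * payload_per_packet / wire_per_packet
print(f"~{goodput:.0f} Mbit/s TCP/IPv6 goodput")   # ~107 with these assumptions

With these assumed overheads the estimate lands around 107 Mbit/s, comfortably above the promised 100 and roughly consistent with the ~105 I actually measure.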
> It could also insert useful data around how many packets arrived that the speed test app(s) could use to determine if there are issues on wan or lan.
[SM] I think speedtests should report: number of parallel flows, total number of packets, total number of bytes transferred, number of retransmits, and finally the MTU and, more importantly for TCP, the MSS (or average packet size, but that is much harder to get at with TCP). AND latency under load ;)
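As a sketch of what I mean, such a report could be as small as a record like this (field names are of course just made up):

from dataclasses import dataclass

@dataclass
class SpeedtestReport:
    parallel_flows: int        # number of concurrent measurement flows
    total_packets: int         # packets sent/received across all flows
    total_bytes: int           # bytes transferred across all flows
    retransmits: int           # TCP retransmissions observed
    mtu: int                   # path MTU used during the test
    tcp_mss: int               # negotiated MSS (or average packet size)
    idle_latency_ms: float     # baseline latency before load
    loaded_latency_ms: float   # latency under load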
>
> I say mangle here because many traffic shapers are transparent so the speed test app itself doesn’t really have a way to ask the shaper directly.
[SM] I am pretty sure that is not going to happen... as this smells like a gross layering violation, unless one comes up with some IP extension header that contains that information. Having intermediary nodes write into the payload area of packets is frowned upon in the IETF, IIRC...
>
> My point in all of this is that if you’re giving the end user information, it should be right. No information is better than false information.
[SM] A normal speedtest is not actually wrong just because it is not 100% precise and accurate. At the current time, users operating a traffic shaper can be expected to turn it off during an official speedtest. If a user wanted to cheat and artificially lower their achieved rates, there is way more bang for your buck in either forcing IP fragmentation or using MSS clamping to cause the speedtest to use smaller packets. This is not only due to the higher overhead fraction for smaller packets, but simply because, in my admittedly limited experience, few CPE seem prepared for the PPS processing required to deal with a saturating flow of small packets.
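Quick back-of-the-envelope numbers, in Python, to show both effects at once (the 38 bytes of ethernet framing and plain IPv4/TCP headers are assumptions, purely illustrative):

GROSS_MBPS = 100.0
FRAMING = 38          # assumed: ethernet 14 + FCS 4 + preamble/IFG 20
IP_TCP = 40           # IPv4 20 + TCP 20, no options

def goodput_mbps(mss):
    return GROSS_MBPS * mss / (mss + IP_TCP + FRAMING)

def packets_per_second(mss):
    return GROSS_MBPS * 1e6 / ((mss + IP_TCP + FRAMING) * 8)

for mss in (1460, 536):
    print(f"MSS {mss}: ~{goodput_mbps(mss):.1f} Mbit/s goodput, "
          f"~{packets_per_second(mss):.0f} packets/s for the CPE to forward")
# MSS 1460: ~94.9 Mbit/s at ~8100 pps; MSS 536: ~87.3 Mbit/s at ~20400 pps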
However, cheating in official tests is neither permitted nor to be expected (most humans act honestly). Between business partners like an ISP and its customer there should be an initial assumption of good will in either direction, no?
> End users will call their ISP or worse get on social media and trash them because they bought a $29 netgear at Walmart that is terrible.
[SM] Maybe, but unlikely to affect the reputation of an ISP unless that is not a rare exception but the rule... Think about reading e.g. amazon 1-star reviews: some read like a genuinely faulty product and some clearly show the writer had no clue... The same is true for social media posts; unless you happen to be in the center of a veritable shit storm, a decent ISP should be able to shrug off a few negative comments, no?
Over here, from looking at ISPs' forums, the issue is often reversed: genuine problem reports are rejected because end-users did not use the ISP-supplied router/modem... (and admittedly that can cause problems, but these problems are not guaranteed).
>
> After all, the entire point of all of this is end-user experience. The only benefit to ISPs is that happy users are good for business.
[SM] A customer that can confirm and see that what their ISP promised is what the ISP actually delivers is likely to feel validated for selecting that ISP. As a rule, happy customers tend to stick...
> A lot of the data that can be collected at various points along the path are better for ISPs to use to update their networks to improve user experience, but aren’t so useful to the 99% of users that just want low ‘lag’ on their games and no buffering.
>
>
>
>
> On Mar 13, 2023 at 3:00:23 PM, Sebastian Moeller <moeller0 at gmx.de> wrote:
>> Hi Jeremy,
>>
>>> On Mar 13, 2023, at 20:52, Jeremy Austin <jeremy at aterlo.com> wrote:
>>>
>>>
>>>
>>> On Mon, Mar 13, 2023 at 12:34 PM dan <dandenson at gmail.com> wrote:
>>>
>>> See, you're coming around. Cake is autorating (or very close, 'on
>>> device') at the wan port. not the speed test device or software. And
>>> the accurate data is collected by cake, not the speed test tool. That
>>> tool is reporting false information because it must, it doesn't know
>>> the other consumers on the network. It's 'truest' when the network is
>>> quiet but the more talkers the more the tool lies.
>>>
>>> cake, the kernel, and the wan port all have real info, the speed test
>>> tool does not.
>>>
>>> I'm running a bit behind on commenting on the thread (apologies, more later) but I point you back at my statement about NTIA (and, to a certain extent, the FCC):
>>>
>>> Consumers use speed tests to qualify their connection.
>>
>> [SM] And rightly so... this put a nice stop to the perverse practice of selling contracts stating (up to) 100 Mbps for links that could never reach that capacity; now an ISP is careful in what it promises... Speedtests (especially using the official speedtest app, which tries to make users pay attention to a number of important points, e.g. not over WiFi, but over an ethernet port that has a capacity above the contracted speed) seem to be good enough for that purpose. Really, over here that is the law, ISPs are still doing fine, and we are talking about low single-digit thousands of complaints in a market with ~40 million households.
>>
>>>
>>> Whether AQM is applied or not, a speed test does not reflect in all circumstances the capacity of the pipe. One might argue that it seldom reflects it.
>>
>> [SM] But one would be wrong; at least the official speedtests over here are pretty reliable, because they seem to be competently managed. E.g. users need to put in the contracted speed (drop-down boxes to select the ISP and contract name) and the test infrastructure will only start the test if it managed to reserve sufficient capacity on the test servers to reliably saturate the contracted rate. This is a bit of engineering and not witchcraft, really. ;)
>>
>>> Unfortunately, those who have "real info", to use Dan's term, are currently nearly powerless to use it. I am, if possible, on both the ISP and consumer side here.
>>
>> [SM] If you are talking about speedtests being systematically wrong in getting usable capacity estimates, I disagree; if your point is that a sole focus on this measure misses the way more important point of keeping latency under load limited, I fully agree. That point is currently lost on the national regulator over here as well.
>>
>>> And yes, Preseem does have an iron in this fire, or at least a dog in this fight.
>>
>> [SM] Go team!
>>
>>> Ironically, the FCC testing for CAF/RDOF actually *does* take interface load into account, only tests during peak busy hours, and /then/ does a speed test. But NTIA largely ignores that for BEAD.
>>
>> [SM] I admit that I have not looked deeply into these different test methods, and will shut up about this topic until I have, to avoid wasting your time.
>>
>> Regards
>> Sebastian
>>
>>
>>>
>>> --
>>> --
>>> Jeremy Austin
>>> Sr. Product Manager
>>> Preseem | Aterlo Networks
>>> preseem.com
>>>
>>> Book a Call: https://app.hubspot.com/meetings/jeremy548
>>> Phone: 1-833-733-7336 x718
>>> Email: jeremy at preseem.com
>>>
>>> Stay Connected with Newsletters & More: https://preseem.com/stay-connected/
>>