A TCP 3WHS (three-way handshake) may be a better test. It's supported in
iperf 2. A much faster tcp connect() is also a differentiator between wired
OSP (outside plant) and FWA (fixed wireless access). TCP_FAST_OPEN (TFO) and
the setup aspects of QUIC show that fast early state exchanges are being
engineered in, what I consider, non-ideal ways. These setup optimizations
remind me of the PSTN and circuit setups. It seems best to make them low
cost and high speed. Not testing tcp connect() in a robust manner seems like
a fundamental industry escape.

Bob

On Nov 19, 2023, at 3:04 AM, le berger des photons via Nnagain wrote:

> but you can see if it's doing what you want it to, and you can compare it
> to other products in the same space.
>
> On Fri, Nov 17, 2023 at 9:31 PM Jack Haverty via Nnagain
> <nnagain@lists.bufferbloat.net> wrote:
>
>> On 11/17/23 11:27, Dave Taht via Nnagain wrote:
>>
>>> one of the things we really wished existed was a standardized way to
>>> test latency and throughput to routers. It would be super helpful if
>>> there was a standard in consumer routers that allowed users to both
>>> ping and fetch 0kB files from their routers, and also run
>>> download/upload tests.
>>
>> Back when I was involved in operating a network, we tried to track
>> latency and throughput by standard ping and related tests. We discovered
>> that, in addition to the network conditions, the results were often
>> dependent on the particular equipment and software involved at the time.
>> Some companies treated ping traffic (e.g., anything directed to the
>> "echo" port) as low priority, since it was obviously (to them) less
>> important than any other traffic. Others treated such traffic as high
>> priority - it made their results in review articles look better.
>>
>> In another case we discovered that one brand of desktop computer
>> achieved much higher throughputs over the net than similar products from
>> other manufacturers. It took some serious technical investigation, but
>> we eventually discovered that the high throughput was achieved by
>> violating the Ethernet specification. The offending vendor didn't follow
>> the rules about timing. But their test results looked much better than
>> the competition's.
>>
>> IMHO the root of the problem is that you cannot assume much about what
>> any software and hardware are doing. There are lots of specs, standards,
>> and mandates in RFCs or even governmental rules and regulations. But
>> lacking any kind of testing or certification, it's difficult to tell
>> whether those "standards" are actually being followed. If someone,
>> whether a technical organization or a government regulator, declares or
>> legislates some protocol, algorithm, or behavior to be a required
>> "standard," it should be accompanied by mechanisms and processes for
>> testing to verify that the standard is implemented correctly and
>> actually used, and by certification so that purchasers are informed.
>>
>> Jack Haverty

_______________________________________________
Nnagain mailing list
Nnagain@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/nnagain
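
A minimal sketch of the connect()-timing test Bob describes, assuming only
a reachable TCP listener (the host and port below are placeholders, not
anything standardized). A blocking connect() returns once the 3WHS
completes, so the elapsed time approximates one round trip plus local
stack overhead:

#!/usr/bin/env python3
# Rough sketch: estimate TCP 3WHS time by timing a blocking connect().
# HOST and PORT are placeholders; point them at any open TCP listener.
import socket
import time

HOST = "192.0.2.1"   # placeholder: router or test-server address
PORT = 80            # placeholder: any open TCP port

samples = []
for _ in range(10):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(5.0)
    t0 = time.monotonic()
    try:
        s.connect((HOST, PORT))          # returns when the 3WHS completes
        samples.append((time.monotonic() - t0) * 1000.0)
    except OSError as e:                 # covers refusals and timeouts
        print("connect failed:", e)
    finally:
        s.close()

if samples:
    print("connect() ms over %d tries: min %.2f  avg %.2f  max %.2f" %
          (len(samples), min(samples),
           sum(samples) / len(samples), max(samples)))

Repeating the connect and reporting the spread, rather than a single
number, is what makes the test "robust" in the sense Bob is after.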
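A similar sketch of the 0kB-file fetch Dave wished for, assuming a
hypothetical tiny object served by the router itself. The URL below is a
placeholder; no consumer router standardizes such an endpoint today, which
is exactly the gap being discussed:

#!/usr/bin/env python3
# Rough sketch: time an HTTP GET of a (hypothetical) 0kB file on the
# router. With a zero-length body, the elapsed time is dominated by
# connection setup and request/response latency rather than throughput.
import time
import urllib.request

URL = "http://192.168.1.1/0kb.bin"  # placeholder address and path

t0 = time.monotonic()
with urllib.request.urlopen(URL, timeout=5) as r:
    r.read()                        # drain the (empty) body
print("0kB fetch took %.2f ms" % ((time.monotonic() - t0) * 1000.0))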