I think he understands that he's talking about page-fetch time, to an
audience that won't believe that latency is like "latent fingerprints":
stuff that hasn't shown up yet. Such folks annoy me (;-))

Note his quote usage in this:

> *IMPORTANT NOTE* about the -c {concurrency} option: if you ask for -c
> 10, each "page" will consist of 10 parallel fetches of the URL, and the
> "latency" will be the amount of time it takes to get the last bit from
> the last concurrent child fetch.

--dave

On 2019-12-19 2:32 p.m., Dave Taht wrote:
> I was not aware that jim salter had really gone to town on measuring
> latency under load in the past year - notably the 4-stream 1024p + web
> browsing torture test used here:
>
> https://arstechnica.com/gadgets/2019/11/ars-puts-googles-new-nest-wi-fi-to-the-test/?itm_source=parsely-api
> https://arstechnica.com/gadgets/2019/12/amazons-inexpensive-eero-mesh-wi-fi-kit-is-shockingly-good/?comments=1
>
> He considers under 500 ms of browsing latency to be "good". Not
> entirely sure how he's calculating that; I think he's measuring page
> completion time rather than "latency" per se.
>
> The tools he uses are here:
>
> https://github.com/jimsalterjrs/network-testing/blob/master/README.md

--
David Collier-Brown,         | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
davecb@spamcop.net           |            -- Mark Twain
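The *IMPORTANT NOTE* quoted above says that with -c 10, a "page" is done
only when the slowest of the 10 concurrent child fetches returns, so the
reported "latency" is the maximum of the children, not the mean. A minimal
sketch of that measurement style, with sleeps standing in for real HTTP
fetches (the function names and durations here are made up for
illustration, not taken from Jim Salter's actual tool):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(duration):
    # Stand-in for one child fetch of the URL; sleeps instead of
    # touching the network so the sketch is self-contained.
    time.sleep(duration)
    return duration

def page_latency(durations):
    # Time one "page" of concurrent fetches, -c style: the page
    # finishes only when the last (slowest) child fetch returns.
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=len(durations)) as pool:
        list(pool.map(fetch, durations))
    return time.monotonic() - start

# Three "children" taking 50 ms, 100 ms, and 200 ms: the page
# latency is dominated by the 200 ms straggler, not the average.
elapsed = page_latency([0.05, 0.10, 0.20])
```

This is why the number behaves more like page completion time than a
per-request latency: one slow child fetch sets the figure for the whole
page.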