Date: Tue, 11 Oct 2022 09:15:20 +0200
From: Sebastian Moeller
To: Bob McMahon, David Lang
CC: Rpm, Make-Wifi-fast, Cake List, Taraldsen Erik, bloat
Subject: Re: [Make-wifi-fast] [Bloat] [Cake] The most wonderful video ever about bufferbloat

Hi Bob,

On 11 October 2022 02:05:40 CEST, Bob McMahon wrote:
>It's too big because it's oversized, so it's in the size domain. It's
>basically Little's law's value for the number of items in a queue.
>
>*Number of items in the system = (the rate items enter and leave the
>system) x (the average amount of time items spend in the system)*
>
>
>Which gets driven to
the standing queue size when the arrival rate
>exceeds the service rate - so the driving factor isn't the service and
>arrival rates, but *the queue size* when *any service rate is less than an
>arrival rate.*

[SM] You could also argue it is the ratio of arrival to service rates, with the queue size being a measure correlating with how long the system will tolerate ratios larger than one...

>
>In other words, one can find and measure bloat regardless of the
>enter/leave rates (as long as the leave rate is too slow), and the value of
>memory units found will always be the same.
>
>Things like prioritizations to jump the line are somewhat of hacks at
>reducing the service time for a specialized class of packets, but nobody
>really knows which packets should jump.

[SM] Au contraire, most everybody 'knows' it is their packets that should jump ahead of the rest ;) For intermediate hop queues, however, that endpoint perception is not really actionable due to the lack of robust and reliable importance identifiers on packets. Inside a 'domain', DSCPs might work if subjected to strict admission control, but that typically will not help end-to-end traffic over the internet. This is BTW why I think FQ is a great concept, as it mostly results in the desirable outcome of not picking winners and losers (like arbitrarily starving a flow), but I digress.

>Also, nobody can define what
>working conditions are, so that's another problem with this class of tests.

[SM] While real working conditions will be different for each link and probably vary over time, it seems achievable to come up with a set of pessimistic assumptions for how to model a challenging working condition against which to test potential remedies, assuming that such remedies will also work well under less challenging conditions, no?
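[Editor's note: Bob's Little's-law point can be made concrete with a toy calculation. All numbers below are made up for illustration; the sketch just shows why the standing delay of a full buffer depends on the buffer size and service rate, not on how far arrivals exceed service.]

```python
# Little's law: N = arrival_rate * time_in_system.
# Once the arrival rate exceeds the service rate, a finite queue fills to
# its configured size, and the standing (bufferbloat) delay becomes
# queue_size / service_rate, regardless of how large the overload is.

def standing_delay_s(queue_bytes: int, service_rate_bps: float) -> float:
    """Delay added by a full queue of queue_bytes draining at service_rate_bps."""
    return queue_bytes * 8 / service_rate_bps

# Illustrative example: a 1 MiB buffer in front of a 10 Mbit/s link adds
# roughly 0.84 s of queueing delay whenever it stands full.
print(round(standing_delay_s(1024 * 1024, 10e6), 2))
```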
>
>Better maybe just to shrink the queue and eliminate all unneeded queueing
>delays.

[SM] The 'unneeded' does a lot of work in that sentence ;). I like Van's(?) description of queues as shock absorbers, so queue size will have a lower acceptable limit, assuming users want to achieve 'acceptable' throughput even with existing bursty senders. (Not all applications are suited for pacing, so some level of burstiness seems unavoidable.)

>Also, measure the performance per "user conditions", which is going
>to be different for almost every environment (and is correlated to time and
>space.) So any engineering solution is fundamentally suboptimal.

[SM] A matter of definition: if the requirement is to cover many user conditions, the optimality measure simply needs to be changed from per individual condition to over many/all conditions, no?

>Even
>pacing the source doesn't necessarily do the right thing because that's
>like waiting on the waitlist at home vs. in the restaurant lobby.

[SM] +1.

>Few
>care about where messages wait (unless the pitch is that AQM is the only
>solution, which drives to a self-fulfilling prophecy - that's why the tests
>have to come up with artificial conditions that can't be simply defined.)

Hrm, so the RRUL test, while not the end-all of bufferbloat/working-conditions tests, is not that complicated:

Saturate a link in both directions simultaneously with multiple greedy flows while measuring load-dependent latency changes for small isochronous probe flows.

Yes, it would be nice to have additional higher-rate probe flows, also bursty ones to emulate on-line games, and 'pumped' greedy flows to emulate DASH 'streaming', and a horde of small greedy flows that mostly end inside the initial window and slow start. But at its core, existing RRUL already gives a useful estimate of how a link behaves under saturating loads, all the while being relatively simple. The responsiveness under working
condition seems similar in that it tries to saturate a link with an increasing number of greedy flows, in a sense to create a reasonably bad case that ideally rarely happens.

Regards
Sebastian

>
>Bob
>
>On Mon, Oct 10, 2022 at 3:57 PM David Lang wrote:
>
>> On Mon, 10 Oct 2022, Bob McMahon via Bloat wrote:
>>
>> > I think conflating bufferbloat with latency misses the subtle point in
>> that
>> > bufferbloat is a measurement in memory units more than a measurement in
>> > time units. The first design flaw is a queue that is too big. This
>> youtube
>> > video analogy doesn't help one understand this important point.
>>
>> but the queue is only too big because of the time it takes to empty the
>> queue,
>> which puts us back into the time domain.
>>
>> David Lang
>>
>> > Another subtle point is that the video assumes AQM as the only solution
>> and
>> > ignores others, i.e. pacing at the source(s) and/or faster service
>> rates. A
>> > restaurant that lets one call ahead to put their name on the waitlist
>> > doesn't change the wait time. Just because a transport layer slowed down
>> > and hasn't congested a downstream queue doesn't mean the e2e latency
>> > performance will meet the gaming needs, as an example. The delay is still
>> > there, it's just not manifesting itself in a shared queue that may or may
>> > not negatively impact others using that shared queue.
>> >
>> > Bob
>> >
>> >
>> >
>> > On Mon, Oct 10, 2022 at 2:40 AM Sebastian Moeller via Make-wifi-fast <
>> > make-wifi-fast@lists.bufferbloat.net> wrote:
>> >
>> >> Hi Erik,
>> >>
>> >>
>> >>> On Oct 10, 2022, at 11:32, Taraldsen Erik
>> >> wrote:
>> >>>
>> >>> On 10/10/2022, 11:09, "Sebastian Moeller" wrote:
>> >>>
>> >>> Nice!
>> >>>
>> >>>> On Oct 10, 2022, at 07:52, Taraldsen Erik via Cake <
>> >> cake@lists.bufferbloat.net> wrote:
>> >>>>
>> >>>> It took about 3 hours from when the video was released before we got the
>> >> first request to have SQM on the CPEs we manage as an ISP. Finally
>> >> getting some customer response on the issue.
>> >>>
>> >>> [SM] Will you be able to bump these requests to higher-ups and at
>> >> least change some perception of customer demand for tighter latency
>> >> performance?
>> >>>
>> >>> That would be the hope.
>> >>
>> >> [SM] Excellent, hope this plays out as we wish for.
>> >>
>> >>
>> >>> We actually have fq_codel implemented on the two latest generations of
>> >> DSL routers. We use the sync rate as input to set the shaper rate. Works quite well.
>> >>
>> >> [SM] Cool, if I might ask: what fraction of the sync rate are you
>> >> setting the traffic shaper to, and are you doing fine-grained overhead
>> >> accounting (or simply folding that into a grand "de-rating" factor)?
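[Editor's note: a sync-rate-derived shaper of the kind discussed above could be sketched as follows with cake on Linux. This is a hypothetical illustration, not Erik's actual CPE configuration: the interface name, sync rate, de-rating percentage, and overhead value are all assumptions.]

```shell
#!/bin/sh
# Hypothetical sketch of a sync-rate-derived SQM setup using cake.

WAN_IF=pppoe-wan          # assumed WAN interface name
SYNC_RATE_KBIT=50000      # downstream sync rate reported by the modem (example)
DERATE_PCT=95             # shape to 95% of sync as a grand "de-rating" factor

SHAPER_KBIT=$(( SYNC_RATE_KBIT * DERATE_PCT / 100 ))

# cake can alternatively do fine-grained per-packet overhead accounting
# instead of folding link overhead into the de-rating factor; 'overhead 34'
# would suit e.g. PPPoE over VDSL2/PTM (again an assumption).
tc qdisc replace dev "$WAN_IF" root cake bandwidth "${SHAPER_KBIT}kbit" overhead 34
```

A finer-grained approach would shape closer to 100% of the sync rate and rely entirely on cake's overhead accounting; the trade-off is between precision and robustness against sync-rate misreporting.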
>> >>
>> >>
>> >>> There is also a bit of traction around speedtest.net's inclusion of
>> >> latency under load internally.
>> >>
>> >> [SM] Yes, although IIUC they are reporting the interquartile
>> mean
>> >> for the two loaded latency estimates, which is pretty conservative and
>> only
>> >> really "triggers" for massive, consistently elevated latency; so I expect
>> >> this to be great for detecting really bad cases, but I fear it is too
>> >> conservative and will make a number of problematic links look OK. But
>> hey,
>> >> even that is leaps and bounds better than the old idle-latency-only
>> report.
>> >>
>> >>
>> >>> My hope is that some publication in Norway will pick up on that score
>> >> and do a test and get some mainstream publicity with the results.
>> >>
>> >> [SM] Inside the EU the challenge is to get national regulators
>> and
>> >> the BEREC to start bothering about latency-under-load at all; "some
>> >> mainstream publicity" would probably help here as well.
>> >>
>> >> Regards
>> >> Sebastian
>> >>
>> >>
>> >>>
>> >>> -Erik
>> >>>
>> >>>
>> >>>
>> >>
>> >> _______________________________________________
>> >> Make-wifi-fast mailing list
>> >> Make-wifi-fast@lists.bufferbloat.net
>> >> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>> >
>> >_______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.