From: Sebastian Moeller
Date: Mon, 2 Nov 2015 19:49:08 +0100
To: Alan Jenkins
Cc: cake@lists.bufferbloat.net
Subject: Re: [Cake] cake target corner cases?

Hi Alan,

On Nov 2, 2015, at 17:20, Alan Jenkins wrote:

> On 02/11/2015, Sebastian Moeller wrote:
>> Hi Alan,
>>
>> On Nov 2, 2015, at 01:20, Alan Jenkins wrote:
>>
>>> I thought SQM works fine with mtu * 1.0. But cake has been like this
>>> since at least April, and I didn't notice an extra 7ms when I tried
>>> it. Confusing.
>>
>> Well, sqm-scripts does take an MTU of 1540 bytes, which is less than
>> ideal, because with ATM encapsulation the on-wire size is actually 32
>> cells, i.e. 32 * 53 bytes, so the worst case should be 33 * 53 = 1749
>> bytes; but in reality the actual limit itself only needs to be
>> approximately right... That said, I fixed up sqm-scripts just now (by
>> making target large enough for one 1749-byte packet); I would be
>> delighted if you could measure whether this changes anything, before I
>> increase auto-target to 1.5 worst-case on-wire packets and also scale
>> interval so that target <= 0.1 * interval stays true...
>
> Ok. Actually I wouldn't have measured +7ms because I wasn't testing
> cake flowblind, aka codel. (And I was looking at ping under load, not
> tc -s qdisc, or the tcp RTTs.)
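(For concreteness, a minimal sketch of the worst-case on-wire sizing
discussed above; the helper name is mine and not from sqm-scripts or cake,
only the 48 payload bytes per 53-byte ATM cell are standard:)

    /* Sketch only: worst-case on-wire size of a packet on an ATM link,
     * assuming the packet (plus whatever per-packet overhead is already
     * accounted for elsewhere) is padded out to whole 53-byte cells that
     * each carry 48 bytes of payload. */
    unsigned int atm_wire_size(unsigned int pkt_bytes)
    {
            unsigned int cells = (pkt_bytes + 47) / 48;  /* ceil(pkt_bytes / 48) */
            return cells * 53;
    }
    /* atm_wire_size(1500) == 32 * 53 == 1696,
     * atm_wire_size(1540) == 33 * 53 == 1749 */

(In other words, the 1540-byte figure underestimates the time one full-sized
packet occupies the link, which is what the 1749-byte worst case corrects
for.)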
	Yes, you looked at the real data; the set points are what I hoped to use to affect the real latency under load.

>
> The increased target doesn't increase my bandwidth.

	I believe we are talking about changes so small that they should not matter one way or the other, but then it is nice to be at least consistent with one's own theory ;)

> Actually I'm
> failing to measure any changes at all, even for mtu*1.5. I'm sure
> there's a real effect on buffering, if I had a less noisy way to
> measure it :(. I couldn't work out exactly what you wanted testing;
> if there's something more specific I could probably spend more time on
> it.

	This is pretty much what I expected; the only interesting measurement left, then, is to disable this completely and see how a too-small target affects latency under load... (I just wonder whether this might be better measured with larger probe packets, say full-MTU ICMP packets instead of small ones?)

>
> I agree with the reasoning for 1749 bytes. If you think mtu*1.5 is a
> good idea for SQM, I can't follow the logic, but it wouldn't bother me
> for fq_codel.

	Oh, the 1.5 * 1749 is really just to be on the safe side, but until this setting makes a dent in at least one measurement I am not going to bother increasing it further; I will try to play with keeping interval at 10 to 20 times target, though...

> I'm not really worried about cake, because I don't
> understand it. cake flowblind with mtu*1.5 hard-coded sounds less
> good, but I don't have any reason to use it in that mode :).
>
>
>>> b->cparams.interval = max(rtt_est_ns +
>>>                           b->cparams.target - ns_target,
>>>                           b->cparams.target * 8);
>>
>> I sort of cheated: I knew this was in the code, but I wanted to highlight
>> that we need a better description of what cake does, so as not to confuse
>> future users; sorry for that.
>
> I won't complain, I'm probably the one being obstructive here.

	Not at all; you are right with your analysis and I should have been less oblique...

>
>
>>> Maybe this keeps the target/interval ratio from going too far over the
>>> recommended 10%. Though that's not what CoDel says to do.
>>>
>>> In the CoDel draft you just don't drop below 1 MTU. You don't adjust
>>> target for the rate. There's no suggestion to increase interval
>>> either. It makes sense that long transmission times add to the
>>> expected rtt (SQM does this too).
>>
>> I know; I believe K. Nichols recommended increasing interval, and I opted
>> for the simplest additive increase (as effectively the expected RTTs
>> increase by the amount of transfer time, and for asymmetric links we can
>> mostly ignore the faster direction).
>> Looking at the rationale in the CoDel RFC, I now believe that scaling
>> would be the right thing... in theory a smaller target better preserves
>> responsiveness at a slightly higher bandwidth sacrifice, and with slow
>> links I believe responsiveness is more important; but then I wonder what
>> is more responsive: a higher target ratio with a (smaller) estimated
>> reaction time (to a first approximation, interval is the time we allow
>> sender and receiver to react to our signaling), or the theoretically
>> better target ratio with an artificially increased interval. I guess that
>> needs data...
>
> Data always good. AIUI the ratio is about how low you can set target,
> before the bandwidth reduction becomes painful.
	Yes, but target also sort of correlates with the median expected queueing delay under saturating load, so if target is too high, latency suffers; not exactly what CoDel aims to fix...

>
> I can imagine expected rtt increases e.g. in Tin 4, when it's
> de-prioritised because of exceeding the bandwidth threshold. I don't
> see why it would increase by over 400ms, in the 1Mbit/s case you
> showed.

	Well, I am not sure that I fully understand whether cake is doing what it wants to do there in the first place, and even if cake achieves its goal I am unsure why it does it that way. I would be delighted if someone in the know could elucidate that for me...

Best Regards
	Sebastian

>
> Regards
> Alan
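(And for reference, a minimal sketch of the kind of target/interval scaling
being debated in this thread; the names and constants are illustrative
assumptions, not the actual cake or sqm-scripts logic:)

    #include <stdint.h>

    struct aqm_params {
            uint64_t target_ns;
            uint64_t interval_ns;
    };

    /* Sketch only: raise target to the transmission time of one worst-case
     * on-wire packet at the shaped rate (never below the usual 5 ms), then
     * stretch interval so that target stays at roughly 10% of it. */
    void scale_aqm_params(struct aqm_params *p,
                          uint64_t rate_bps, unsigned int wire_bytes)
    {
            const uint64_t base_target_ns   = 5000000ull;    /* 5 ms */
            const uint64_t base_interval_ns = 100000000ull;  /* 100 ms */
            /* time to serialize one worst-case packet at the shaped rate */
            uint64_t tx_ns = (uint64_t)wire_bytes * 8ull * 1000000000ull / rate_bps;

            p->target_ns   = tx_ns > base_target_ns ? tx_ns : base_target_ns;
            p->interval_ns = 10 * p->target_ns > base_interval_ns
                             ? 10 * p->target_ns : base_interval_ns;
    }

At 1 Mbit/s a 1749-byte worst-case packet takes roughly 14 ms to serialize,
so target would grow well past the 5 ms default and interval along with it;
whether interval should grow additively or by such scaling is exactly the
open question above.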