Date: Sun, 10 May 2015 11:25:03 -0700
From: Dave Taht
To: Sebastian Moeller
Cc: cake@lists.bufferbloat.net, Jonathan Morton,
 "codel@lists.bufferbloat.net", bloat
Subject: Re: [Codel] [Cake] Control theory and congestion control

On Sun, May 10, 2015 at 10:48 AM, Dave Taht wrote:

>>> A control theory-ish issue with codel is that it depends on an
>>> arbitrary ideal (5ms) as a definition for "good queue", where "a
>>> gooder queue"
>>
>> I thought that our set point really is 5% of the estimated RTT, and
>> we just default to 5 since we guesstimate our RTT to be 100ms. Not
>> that I complain, these two numbers seem to work decently over a
>> relatively broad range of true RTTs…
>
> Yes, I should have talked about it as estimated RTT (interval) and a
> seemingly desirable percentage (target). It is very helpful to think
> of it that way if (as in my current testing) you are trying to see
> how much better you can do at very short (sub-1ms) RTTs, where it
> really is the interval you want to be modifying...

oops. I meant target = interval >> 4; and I would have decreased the
interval by a larger amount, or something relative to the rate, but I
merely wanted to see the slope of the curve, and really need to write
cake_drop_monitor rather than just "watch tc -s qdisc show dev eth0".

> I have been fiddling with, as a proof of concept - not an actual
> algorithm - how much shorter you can make the queues at short RTTs.
> What I did was gradually (per packet) subtract 10ns from the cake
> target while at 100% utilization, until the target hit 1ms (or bytes
> outstanding dropped below 3k). Had the cake code still used a
> calculated target from the interval (target >> 4) I would have
> fiddled with the interval instead. Using the netperf-wrapper
> tcp_upload test:
>
> There were two significant results from that (I really should just
> start a blog so I can do images inline)
>
> 1) At 100Mbit, TSO offloads (bulking) add significant latency to
> competing streams:
>
> http://snapon.lab.bufferbloat.net/~d/cake_reduced_target/offload_damage_100mbit.png
>
> This gets much worse as you add tcp flows. I figure day traders would
> take notice. TSO packets have much more mass.
>
> 2) You CAN get fewer packets outstanding at this RTT and still keep
> the link 100% utilized.
>
> The default codel algo stayed steady at 30-31 packets outstanding
> with no losses or marks evident (TSQ?), while the shrinking dynamic
> target ecn-marked fairly heavily and ultimately reduced the packets
> outstanding to 7-17, with a slight improvement in actual throughput.
> (This stuff is so totally inside the noise floor that it is hard to
> discern a difference at all - and you can see the linux
> de-optimization for handing ping packets off to hardware in some of
> the tests, after the tcp flows end, which skews the latency figures.)
>
> http://snapon.lab.bufferbloat.net/~d/cake_reduced_target/dynamic_target_vs_static.png
>
> I think it is back to ns3 to get a better grip on some of this.
>
>> Best Regards
>>         Sebastian
>>
>>> is, in my definition at the moment, "1 packet outstanding ever
>>> closer to 100% of the time while there is 100% utilization".
>>>
>>> We could continue to bang on things (reducing the target or other
>>> methods) and aim for a lower ideal setpoint until utilization
>>> dropped below 100%.
>>>
>>> Which becomes easier the more flows we know are in progress.
>>>
>>>> - Jonathan Morton

-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67
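
[Editor's note: a hypothetical Python sketch of the arithmetic discussed
in the thread. The function names and structure are illustrative only,
not code from cake or codel; it covers the target = interval >> 4
relationship and the per-packet 10 ns target reduction Dave describes.]

```python
# Sketch (assumptions, not thread code) of two ideas from the thread:
#  1) codel can derive its target as a fixed fraction of the estimated
#     RTT ("interval"): target = interval >> 4, i.e. ~6.25% - close to
#     the 5%-of-100ms that yields the shipped 5 ms default.
#  2) The proof-of-concept experiment: subtract 10 ns from the cake
#     target on every packet while the link stays 100% utilized,
#     stopping at a 1 ms floor (or once bytes outstanding fall below 3k).

NS_PER_MS = 1_000_000


def target_from_interval(interval_ns: int) -> int:
    """Derive target as interval >> 4 (~6.25% of the estimated RTT)."""
    return interval_ns >> 4


def shrink_target(target_ns: int, utilized: bool, bytes_outstanding: int,
                  step_ns: int = 10, floor_ns: int = 1 * NS_PER_MS,
                  min_bytes: int = 3000) -> int:
    """One per-packet step of the experimental target reduction."""
    if utilized and bytes_outstanding >= min_bytes and target_ns > floor_ns:
        target_ns = max(floor_ns, target_ns - step_ns)
    return target_ns


# With the 100 ms default interval, the derived target is 6.25 ms:
print(target_from_interval(100 * NS_PER_MS))  # 6250000 ns

# Starting from the 5 ms default, one saturated packet shaves 10 ns:
print(shrink_target(5 * NS_PER_MS, True, 30_000))  # 4999990 ns
```

At 10 ns per packet it takes 400,000 packets to walk the target from
5 ms down to the 1 ms floor, which is why the thread frames this as a
slope-of-the-curve probe rather than a usable algorithm.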