From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sebastian Moeller
To: Alex Elsayed
Cc: bloat@lists.bufferbloat.net
Date: Fri, 12 Jun 2015 21:21:01 +0200
Subject: Re: [Bloat] Bloat done correctly?
Message-Id: <08DFF433-9457-4A3A-92AA-A3B97828F444@gmx.de>
References: <6AD8E150-5751-43AC-8F6C-8175C1E92DE1@gmx.de>

Hi Alex,

On Jun 12, 2015, at 20:51 , Alex Elsayed wrote:

> Sebastian Moeller wrote:
>
>> Hi Benjamin,
>>
>> To go off onto a tangent:
>>
>> On Jun 12, 2015, at 06:45 , Benjamin Cronce wrote:
>>
>>> [...]
>>> Under load while doing P2P (about 80Mb down and 20Mb up just as I started
>>> the test) HFSC: P2P in 20% queue and 80/443/8080 in 40% queue with ACKs
>>> going to a 20% realtime queue http://www.dslreports.com/speedtest/622452
>>
>> I know this is not really your question, but I think the ACKs should go
>> into the same queue as the matching data packets. Think about it this way:
>> if the data is delayed due to congestion, it does not make much sense
>> to tell the sender to send more, faster (which essentially is what ACK
>> prioritization does), as that will not reduce the congestion but
>> rather increase it. There is one caveat though: when ECN is used, it might
>> make sense to send out the ACK that signals the congestion state back
>> to the sender faster… So if you prioritize ACKs, only select those with an
>> ECN-Echo flag ;) @bloat: what do you all think about this refined ACK
>> prioritization scheme?
>
> I'd say that this is wrongly attempting to bind upstream congestion to
> downstream congestion.
>
> Let's have two endpoints, A and B. There exists a stream sent from A towards
> B.
>
> If A does not receive an ack from B in a timely manner, it draws inference
> as to the congestion on the path _towards_ B. Prioritizing acks from B to A
> thus makes this _more accurate to reality_ - a lost ack (rather than the
> absence of an ack due to a lost packet) actually behaves as misinformation
> to the sender, causing them to

	My silent assumption was that we are talking about a de-bloated access link; after all, this is the bloat list and we think we have solved most of that problem. So there is no major congestion on the part of the uplink where prioritization would work (the home router's egress interface), and hence no misinformation there.
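	For concreteness, the "only prioritize ECN-Echo ACKs" classification could look something like the sketch below. This is just an illustration of the selection logic (a real deployment would do this in a tc/netfilter classifier, not in Python); the flag offsets are from RFC 793 and RFC 3168:

```python
# TCP flag bits, as laid out in byte 13 of the TCP header
# (RFC 793, plus ECE/CWR from RFC 3168):
FIN, SYN, RST, PSH, ACK, URG, ECE, CWR = (1 << i for i in range(8))

def is_ece_ack(tcp_segment: bytes) -> bool:
    """Return True for a 'pure' ACK carrying an ECN-Echo flag --
    the only kind of ACK the refined scheme would prioritize."""
    if len(tcp_segment) < 20:          # shorter than a minimal TCP header
        return False
    data_offset = (tcp_segment[12] >> 4) * 4   # header length in bytes
    flags = tcp_segment[13]
    payload_len = len(tcp_segment) - data_offset
    # ACK set, ECN-Echo set, and no payload (a bare acknowledgement)
    return bool(flags & ACK) and bool(flags & ECE) and payload_len == 0
```

	A plain ACK without ECE would fall through to the same queue as its data packets, which is the behaviour argued for above.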
	As I stated in another mail to Benjamin, instead of ACK prioritization I would de-bloat the access link ;) I would add that the currently recommended solutions, shaper+fq_codel or cake, both give some precedence to sparse flows, which does boost small packets like ACKs (until there are too many competing sparse flows, at which point all flows are treated as non-sparse; both AQMs also, IIRC, preferentially drop/mark packets from large flows, so ACKs will still get some love under upstream congestion).

> 1.) back off sending when the sending channel is not congested and
> 2.) resend a packet that _already arrived_.

	But TCP ACKs are cumulative, so the information from a lost ACK is also included in the next one; you would need to lose a stretch of ACKs before your scenario becomes relevant, no?

> The latter point is a big one: Prioritized ACKs (may) reduce spurious
> resends, especially on asymmetric connections - and spurious resends are
> pure network inefficiency. Especially since the data packets are likely far
> larger than the ACKs. Which would _also_ get resent.

	But for the spurious resends you would either need to drop several ACKs in a row or delay the ACKs long enough that the RTO triggers; both are situations I would recommend avoiding anyway ;) So I am still not convinced by the ACK-priority rationale, assuming a de-bloated access link. If you have enough control over the link to implement ACK games, I believe you are better off de-bloating it more thoroughly… Again, not an expert, just a layman's opinion.

Best Regards
	Sebastian

>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat