Subject: Re: [Rpm] Alternate definitions of "working condition" - unnecessary?
From: Sebastian Moeller
Date: Thu, 7 Oct 2021 12:30:36 +0200
To: Christoph Paasch
Cc: Jonathan Morton, Rpm

Hi Christoph,

> On Oct 7, 2021, at 02:11, Christoph Paasch via Rpm wrote:
>
> On 10/07/21 - 02:18, Jonathan Morton via Rpm wrote:
>>> On 7 Oct, 2021, at 12:22 am, Dave Taht via Rpm wrote:
>>> There are additional cases where, perhaps, the fq component works, and the aqm doesn't.
>>
>> Such as Apple's version of FQ-Codel? The source code is public, so we might as well talk about it.
>
> Let's not just talk about it, but actually read it ;-)
>
>> There are two deviations I know about in the AQM portion of that. First is that they do the marking and/or dropping at the tail of the queue, not the head. Second is that the marking/dropping frequency is fixed, instead of increasing during a continuous period of congestion as real Codel does.
>
> We don't drop/mark locally generated traffic (which is the use-case we care about).

In this discussion probably true, but I recall that one reason why sch_fq_codel is a more versatile qdisc than sch_fq under Linux is that fq excels for locally generated traffic, while fq_codel also works well for forwarded traffic. And I use "forwarding" here to encompass things like VMs running on a host, where direct "back-pressure" will not work...

> We signal flow-control straight back to the TCP-stack at which point the queue
> is entirely drained before TCP starts transmitting again.
>
> So, drop-frequency really doesn't matter because there is no drop.

But is it still codel/fq_codel if it does not implement head drop (as described in https://datatracker.ietf.org/doc/html/rfc8290#section-4.2) and if the control loop (https://datatracker.ietf.org/doc/html/rfc8289#section-3.3) is changed? (I am also wondering how reducing the default number of sub-queues from 1024 to 128 plays out against the birthday paradox; a quick estimate is sketched below, after the P.S.)

Best Regards
        Sebastian

P.S.: My definition of working conditions entails bidirectionally saturating traffic with responsive and (transiently) under-responsive flows. Something like a few long-running TCP transfers to generate "base-load" and a higher number of TCP flows in IW or slow start to add some spice to the whole. In the future, once QUIC actually takes off*, adding more well-defined/behaved UDP flows to the mix seems reasonable. My off-the-cuff test for the effect of IW used to be to start a browser and open a collection of (30-50) tabs, getting a nice "thundering herd" of TCP flows starting around the same time (a minimal sketch of that kind of load follows below). But it seems browser makers got too smart for me and will no longer do what I want; instead they temporally space the loading of the different tabs so that my nice thundering herd is less obnoxious (which IMHO is actually the right thing to do for actual usage, but for testing it sucks).

*) Occasionally browsing the NANOG archives makes me wonder how the move from HTTP/TCP to QUIC/UDP is going to play with operators' propensity to rate-limit UDP, but that is a different kettle of fish...
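To make the "thundering herd" load above a bit more concrete, here is a minimal Python sketch of what I have in mind; it is not a polished tool, just N TCP connections started at (almost) the same moment so that their IW/slow-start phases overlap. The URL is a placeholder and the flow count is a guess at what 30-50 browser tabs would have produced; adapt both to a server you are actually allowed to hammer.

#!/usr/bin/env python3
# Rough "thundering herd" load sketch: start many TCP flows at roughly the
# same time so that most of them are in IW/slow start together.
import concurrent.futures
import urllib.request

URL = "http://test-server.example.net/100MB.bin"  # placeholder, not a real host
N_FLOWS = 40                                       # roughly 30-50 tabs' worth

def fetch(i):
    # Each call opens its own TCP connection, so the first few RTTs of
    # every flow overlap in slow start.
    with urllib.request.urlopen(URL, timeout=30) as r:
        return i, len(r.read(1 << 20))             # pull ~1 MiB per flow, then stop

with concurrent.futures.ThreadPoolExecutor(max_workers=N_FLOWS) as pool:
    for i, n in pool.map(fetch, range(N_FLOWS)):
        print(f"flow {i}: received {n} bytes")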
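And to put a rough number on the birthday-paradox question above: assuming an ideal hash that spreads flows uniformly over the flow queues (the real hash may of course behave differently), the chance that at least two of n concurrent flows share a queue is 1 minus the product of (q - i)/q for i = 0 .. n-1. With 20 concurrent flows that comes out at roughly 0.17 for 1024 queues versus roughly 0.79 for 128 queues, so shared queues stop being a corner case rather quickly.

#!/usr/bin/env python3
# Back-of-the-envelope birthday-paradox estimate for hashing n concurrent
# flows into q flow queues, assuming an ideal uniform hash.
def p_shared_queue(n_flows, n_queues):
    p_all_distinct = 1.0
    for i in range(n_flows):
        p_all_distinct *= (n_queues - i) / n_queues
    return 1.0 - p_all_distinct

for q in (1024, 128):
    for n in (10, 20, 50):
        print(f"{n:3d} flows into {q:4d} queues: "
              f"P(some flows share a queue) = {p_shared_queue(n, q):.2f}")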
>
> Christoph
>
>> I predict the consequences of these mistakes will differ according to the type of traffic applied:
>>
>> With TCP traffic over an Internet-scale path, the consequences are not serious. The tail-drop means that the response at the end of slow-start will be slower, with a higher peak of intra-flow induced delay, and there is also a small but measurable risk of tail-loss causing a more serious application-level delay. These alone *should* be enough to prompt a fix, if Apple are actually serious about improving application responsiveness. The fixed marking frequency, however, is probably invisible for this traffic.
>>
>> With TCP traffic over a short-RTT path, the effects are more pronounced. The delay excursion at the end of slow-start will be larger in comparison to the baseline RTT, and when the latter is short enough, the fixed congestion signalling frequency means there will be some standing queue that real Codel would get rid of. This standing queue will influence the TCP stack's RTT estimator and thus RTO value, increasing the delay consequent to tail loss.
>>
>> Similar effects to the above can be expected with other reliable stream transports (SCTP, QUIC), though the details may differ.
>>
>> The consequences with non-congestion-controlled traffic could be much more serious. Real Codel will increase its drop frequency continuously when faced with overload, eventually gaining control of the queue depth as long as the load remains finite and reasonably constant. Because Apple's AQM doesn't increase its drop frequency, the queue depth for such a flow will increase continuously until either a delay-sensitive rate selection mechanism is triggered at the sender, or the queue overflows and triggers burst losses.
>>
>> So in the context of this discussion, is it worth generating a type of load that specifically exercises this failure mode? If so, what does it look like?
>>
>> - Jonathan Morton
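Coming back to the fixed-frequency point, here is a toy Python sketch of the control-law difference being discussed, using the textbook law from RFC 8289, section 3.3 (the next signal comes interval/sqrt(count) after the previous one) rather than Apple's actual code, and ignoring the target check and the count-decay details of the real algorithm:

#!/usr/bin/env python3
# Toy comparison of signalling schedules under sustained overload:
# CoDel spaces successive marks/drops interval/sqrt(count) apart, so the
# signalling frequency keeps rising; a fixed-frequency AQM keeps spacing
# them a constant interval apart. Times are in milliseconds.
from math import sqrt

INTERVAL = 100.0  # ms, CoDel's default interval

def codel_signal_times(n_signals):
    t, times = 0.0, []
    for count in range(1, n_signals + 1):
        t += INTERVAL / sqrt(count)   # each successive signal comes sooner
        times.append(t)
    return times

def fixed_signal_times(n_signals):
    return [INTERVAL * k for k in range(1, n_signals + 1)]

print("signal#  codel(ms)  fixed(ms)")
for k, (c, f) in enumerate(zip(codel_signal_times(10), fixed_signal_times(10)), 1):
    print(f"{k:7d}  {c:9.1f}  {f:9.1f}")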