From: Dave Taht
Date: Mon, 11 Jun 2018 23:47:26 -0700
To: Jonathan Morton
Cc: Sebastian Moeller, bloat
Subject: Re: [Bloat] [Bug 1436945] Re: devel: consider fq_codel as the default qdisc for networking

As for the tail loss/RTO problem: it doesn't happen unless we are already in a drop state for a queue, and it doesn't happen very often, and when it does, it seems like a good idea to me to back off that thoroughly in the face of so much congestion. fq_codel originally never dropped the last packet in a queue, which led to a worst-case latency of 1024 * MTU at the link bandwidth.
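To put rough numbers on that worst case, here is a back-of-the-envelope sketch in Python (the 1500-byte MTU and the link rates below are illustrative assumptions, not measurements from this thread):

    # Worst-case backlog if every one of fq_codel's 1024 queues holds one
    # undroppable full-MTU packet, plus the serialization delay each
    # full-MTU packet adds at a given link rate.  Illustrative only.
    MTU_BYTES = 1500      # assumed Ethernet MTU
    QUEUES = 1024         # fq_codel's default number of queues

    def serialization_ms(rate_mbps, nbytes=MTU_BYTES):
        """Time to put one packet of nbytes on the wire at rate_mbps."""
        return nbytes * 8 / (rate_mbps * 1e6) * 1e3

    def worst_case_backlog_ms(rate_mbps):
        """Delay seen behind 1024 full-MTU packets at rate_mbps."""
        return QUEUES * serialization_ms(rate_mbps)

    for rate in (0.5, 1, 10, 100):    # Mbps, illustrative rates
        print(f"{rate:>5} Mbps: {serialization_ms(rate):6.2f} ms per MTU packet, "
              f"{worst_case_backlog_ms(rate) / 1e3:7.1f} s worst-case backlog")

At 0.5 Mbps that works out to about 24 ms per full-MTU packet and on the order of 25 seconds for a full 1024-packet backlog, which is why the old never-drop-the-last-packet behaviour was worth fixing.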
That got fixed and I'm happy with the result. I honestly don't know what cake does anymore, except that Jonathan rarely tests at real RTTs, where the amount of data in the pipe is a lot more than what's sane to have queued, whereas I almost always use realistic path delays. It would be good to resolve this debate in some direction one day, perhaps by measuring utilization > 0 on a wide range of tests. (A simplified sketch of the dequeue ordering under debate is appended after the quoted thread below.)

On Mon, Jun 11, 2018 at 11:39 PM, Dave Taht wrote:
> "So there is no effect on other flows' latency, only subsequent
> packets in the same flow - and the flow is always hurt by dropping
> packets, rather than marking them."
>
> Disagree. The flow being dropped from will reduce its rate within an RTT,
> reducing the latency impact on other flows.
>
> I regard an ideal queue length as 1 packet or aggregate, as "showing"
> all flows the closest thing to the real path RTT. You want to store
> packets in the path, not in buffers.
>
> ECN has mass. It is trivial to demonstrate an ECN-marked flow starving
> out a non-ECN flow at low rates.
>
> On Wed, Jun 6, 2018 at 6:04 AM, Jonathan Morton wrote:
>>>>> The rationale for that decision still is valid: at low bandwidth every opportunity to send a packet matters…
>>>>
>>>> Yes, which is why the DRR++ algorithm is used to carefully choose which flow to send a packet from.
>>>
>>> Well, but look at it this way: the longer the traversal path after the cake instance, the higher the probability that the packet gets dropped by a later hop.
>>
>> That's only true in case Cake is not at the bottleneck, in which case it will only have a transient queue and AQM will disengage anyway. (This assumes you're using an ack-clocked protocol, which TCP is.)
>>
>>>>> …and every packet being transferred will increase the queued packets' delay by its serialization delay.
>>>>
>>>> This is trivially true, but has no effect whatsoever on inter-flow induced latency, only intra-flow delay, which is already managed adequately well by an ECN-aware sender.
>>>
>>> I am not sure that I am getting your point…
>>
>> Evidently. You've been following Cake development for how long, now? This is basic stuff.
>>
>>> …at 0.5 Mbps every full-MTU packet will hog the line for 20+ milliseconds, so all other flows will incur at least that 20+ ms additional latency; this is independent of the inter- or intra-flow perspective, no?
>>
>> At the point where the AQM drop decision is made, Cake (and fq_codel) has already decided which flow to service. On a bulk flow, most packets are the same size (a full MTU), and even if the packet delivered is the last one presently in the queue, probably another one will arrive by the time it is next serviced - so the effect of the *flow's* presence remains even into the foreseeable future.
>>
>> So there is no effect on other flows' latency, only subsequent packets in the same flow - and the flow is always hurt by dropping packets, rather than marking them.
>>
>> - Jonathan Morton

-- 
Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619
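Appended sketch: a heavily simplified illustration of the dequeue ordering argued about in the quoted thread. DRR++ picks which flow to service (sparse/"new" flows before bulk/"old" flows, each with a byte deficit topped up by a quantum), and only then is the CoDel drop/mark decision applied to that flow's head packet. This is illustrative Python, not the fq_codel or cake kernel code; codel_should_drop is a stand-in for the real CoDel state machine, the QUANTUM value is an assumption, and details such as how dropped bytes are accounted differ from the real implementations.

    from collections import deque

    QUANTUM = 1514   # bytes of credit per round per flow; roughly one MTU (illustrative)

    class Flow:
        def __init__(self):
            self.pkts = deque()   # queued packet lengths, in bytes
            self.deficit = 0

    def dequeue(new_flows, old_flows, codel_should_drop):
        """Pick a flow with DRR++, then apply the AQM decision to its head packet.
        new_flows and old_flows are deques of Flow objects."""
        while True:
            # Sparse ("new") flows are served before bulk ("old") flows.
            if new_flows:
                flow, lst = new_flows[0], new_flows
            elif old_flows:
                flow, lst = old_flows[0], old_flows
            else:
                return None                      # nothing queued anywhere

            if flow.deficit <= 0:
                # Out of credit this round: top up and rotate to the back
                # of the old-flows list, then consider the next candidate.
                flow.deficit += QUANTUM
                lst.popleft()
                old_flows.append(flow)
                continue

            if not flow.pkts:
                # Flow ran dry: a new flow is demoted to old, an old flow is released.
                lst.popleft()
                if lst is new_flows:
                    old_flows.append(flow)
                continue

            pkt_len = flow.pkts.popleft()
            if codel_should_drop(pkt_len):       # per-flow AQM decision (stand-in)
                continue                         # drop it and look for another packet
            flow.deficit -= pkt_len
            return pkt_len                       # deliver this packet

The point of the ordering is visible in the sketch: by the time codel_should_drop runs, the inter-flow isolation decision has already been made, so a drop or mark only affects later packets of the same flow.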