From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jonas Mårtensson
Date: Wed, 18 Apr 2018 15:13:08 +0200
To: Toke Høiland-Jørgensen
Cc: Jonathan Morton, Cake List
Subject: Re: [Cake] A few puzzling Cake results
List-Id: Cake - FQ_codel the next generation (cake@lists.bufferbloat.net)

On Wed, Apr 18, 2018 at 1:25 PM, Toke Høiland-Jørgensen wrote:
> Toke Høiland-Jørgensen writes:
>
> > Jonathan Morton writes:
> >
> >>> On 17 Apr, 2018, at 12:42 pm, Toke Høiland-Jørgensen wrote:
> >>>
> >>> - The TCP RTT of the 32 flows is *way* higher for Cake. FQ-CoDel
> >>>   controls TCP flow latency to around 65 ms, while for Cake it is all
> >>>   the way up around the 180ms mark. Is the Codel version in Cake too
> >>>   lenient, or what is going on here?
> >>
> >> A recent change was to increase the target dynamically so that at
> >> least 4 MTUs per flow could fit in each queue without AQM activity.
> >> That should improve throughput in high-contention scenarios, but it
> >> does come at the expense of intra-flow latency when it's relevant.
> >
> > Ah, right, that might explain it. In the 128 flow case each flow has
> > less than 100 Kbps available to it, so four MTUs are going to take a
> > while to dequeue...
>
> OK, so I went and looked at the code and found this:
>
>         bool over_target = sojourn > p->target &&
>                            sojourn > p->mtu_time * bulk_flows * 4;
>
> Which means that we scale the allowed sojourn time for each flow by the
> time of four packets *times the number of bulk flows*.
>
> So if there is one active bulk flow, we allow each flow to queue four
> packets. But if there are ten active bulk flows, we allow *each* flow to
> queue *40* packets.

I'm confused. Isn't the sojourn time for a packet a result of the total
number of queued packets from all flows? If each flow were allowed to
queue 40 packets, the sojourn time would be mtu_time * bulk_flows * 40, no?