From: Benjamin Cronce
Date: Wed, 4 Jul 2018 15:25:27 -0500
To: Jonas Mårtensson
Cc: pete@heistp.net, bloat
Subject: Re: [Bloat] powerboost and sqm

A strict token bucket without fair queuing can cause packet-loss bursts for all flows. In my personal experience with a low (single-digit) RTT, my former 50 Mb/s connection would accept a 1 Gb/s burst and ACK all of the data. The sender would then assume I had a 1 Gb/s link and keep sending at that rate. Around the 200 ms mark there was a steep cliff where all of my traffic suddenly saw ~5% loss for the rest of that second. Once steady state was reached, it was fine. The severity seemed to have a baseline set by the ratio between the provisioned rate and the burst rate, with a multiplier driven not-quite-linearly by the link's current utilization: at ~0% average utilization, a burst that outlasted the bucket induced the maximum loss, while past ~80% utilization it was not much of an issue.
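A toy simulation makes that cliff visible. This is only a sketch of a strict policer, not the ISP's actual shaper; the bucket depth is an assumed value, tuned so the drain lands near the 200 ms mark I saw:

    # Strict token bucket: refill at the provisioned 50 Mb/s, let a deep
    # burst bucket admit a 1 Gb/s sender, and watch the loss cliff when
    # the bucket drains. BUCKET_DEPTH is a guess (~200 ms of drain time).
    PROVISIONED = 50e6    # token refill rate, bits/s
    BURST_RATE = 1e9      # sender's rate during the burst, bits/s
    BUCKET_DEPTH = 190e6  # bucket size, bits (assumed)
    STEP = 0.001          # simulation timestep, seconds

    tokens = BUCKET_DEPTH
    for ms in range(400):
        tokens = min(BUCKET_DEPTH, tokens + PROVISIONED * STEP)
        arrived = BURST_RATE * STEP  # bits offered this millisecond
        sent = min(arrived, tokens)  # strict policing: no queue
        tokens -= sent
        if ms % 50 == 0:
            loss = 1.0 - sent / arrived
            print(f"t={ms:3d} ms  instantaneous loss={loss:6.1%}")

Before ~200 ms every bit is admitted; after that, ~95% of the offered 1 Gb/s is dropped. A real TCP sender backs off after the first drops, which is why the loss I actually observed settled closer to 5%.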
I could reliably recreate the issue by loading a high-bandwidth video on YouTube and jumping around the timeline to unbuffered segments. I had anywhere from 6 ms to 12 ms of latency to YouTube's CDNs, depending on the route and which datacenter. Not only could I measure the issue with ICMP at 100 samples per second, I could also reliably see it in-game, in both UDP- and TCP-based games. Simply shaping to 1-2 Mb/s below my provisioned rate and enabling CoDel reduced the issue to the point where I no longer noticed it.
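On Linux, that shaping setup can be approximated with HTB plus fq_codel. A minimal sketch, assuming eth0 is the WAN-facing interface and a 50 Mb/s plan shaped down to 48 Mb/s (sqm-scripts automates the same idea):

    # Shape egress below the provisioned rate so the ISP's token bucket
    # never polices, then let fq_codel keep per-flow queues short.
    tc qdisc replace dev eth0 root handle 1: htb default 10
    tc class add dev eth0 parent 1: classid 1:10 htb rate 48mbit ceil 48mbit
    tc qdisc add dev eth0 parent 1:10 fq_codel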
On Sun, Jul 1, 2018 at 4:50 PM Jonas Mårtensson <martensson.jonas@gmail.com> wrote:
>
> On Sat, Jun 30, 2018 at 9:46 AM Pete Heist <pete@heistp.net> wrote:
>>
>> On Jun 30, 2018, at 8:26 AM, Jonas Mårtensson <martensson.jonas@gmail.com> wrote:
>>>
>>> I played around with flent a bit, here are some example plots:
>>>
>>> https://dl.dropbox.com/s/facariwkp5x5dh1/flent.zip?dl=1
>>>
>>> The short spikes are not seen with flent, so I'm led to believe these are
>>> just a result of running the "Hi-Res" dslreports test in a browser. In the
>>> flent rrul test, up to about 10 ms of induced latency can be seen during
>>> the "powerboost" phase, but after that it is almost zero. I'm curious how
>>> this is implemented on the ISP side. If anything, sqm seems to induce a
>>> bit more latency during the "steady-state" phase.
>>
>> You may also want to try running flent with --socket-stats and making a
>> tcp_rtt plot. You should see a significant difference in TCP RTT between
>> sfq and anything that uses CoDel.
>
> In case anyone is curious, I tried this and the tcp_rtt plot looks very
> similar to the ping RTT plot, i.e. the latencies are the same.
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
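For anyone who wants to reproduce Pete's tcp_rtt comparison, a rough example invocation (the netperf host is a placeholder; run it once per qdisc under test):

    # Run a 60-second rrul test while collecting per-socket TCP stats,
    # then render the TCP RTT plot from the resulting data file.
    flent rrul -l 60 -H netperf.example.org --socket-stats -t "fq_codel"
    flent -i rrul-*.flent.gz -p tcp_rtt -o tcp_rtt.png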