From: Simon Barber
To: Sebastian Moeller, jb
Cc: bloat
Date: Fri, 24 Apr 2015 19:24:10 -0700
Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
Perhaps where the green is should depend on the customer's access type. For
instance, someone on fiber should have a much better ping than someone on 3G.
But I agree this should be a fixed scale, not dependent on idle ping time.
Although VoIP might be good up to 100 ms, gamers would want lower values.

Simon

Sent with AquaMail for Android
http://www.aqua-mail.com

On April 24, 2015 1:19:08 AM Sebastian Moeller wrote:
> Hi jb,
>
> this looks great!
>
> On Apr 23, 2015, at 12:08, jb wrote:
>
> > This is how I've changed the graph of latency under load per input from
> > you guys.
> >
> > Taken away the log axis.
> >
> > Put in two bands. Yellow starts at double the idle latency and goes to
> > 4x the idle latency; red starts there and goes to the top. No red shows
> > if no bars reach into it, and no yellow band shows if no bars get into
> > that zone.
> >
> > Is it more descriptive?
>
> Mmmh, so the delay we see consists of the delay caused by the distance
> to the server plus the delay of the access technology, meaning the
> unloaded latency can range from a few milliseconds to several hundred
> milliseconds (for the poor sods behind a satellite link…). Any further
> latency developing under load should be independent of distance and
> access technology, as those are already factored into the base latency.
> In both extreme cases, multiples of the base latency do not seem to be
> relevant measures of bloat, so I would like to argue that the yellow and
> red zones should be based on fixed increments and not on a ratio of the
> base latency.
> This is relevant because people on a slow/high-access-latency link
> have much less tolerance for additional latency than people on a fast
> link if certain latency guarantees need to be met, and thresholds as a
> function of the base latency do not reflect this.
> Now, ideally the colors should not be based on the base latency at all
> but should sit at fixed total values: say 200 to 300 ms in yellow for
> VoIP (according to ITU-T G.114, a one-way delay <= 150 ms is recommended
> for VoIP), and 400 to 600 ms in orange (400 ms is the upper bound for
> good VoIP and 600 ms for decent VoIP; according to ITU-T G.114, users
> are very satisfied up to 200 ms one-way delay and satisfied up to
> roughly 300 ms), so anything above 600 ms in deep red?
> I know this is not perfect and the numbers will probably require severe
> "bike-shedding" (and I am not sure that ITU-T G.114 really is a good
> source for the thresholds), but to get a discussion started, here are
> the numbers again:
> 0 to 100 ms: no color
> 101 to 200 ms: green
> 201 to 400 ms: yellow
> 401 to 600 ms: orange
> 601 to 1000 ms: red
> 1001 ms to infinity: purple (or better, marina red?)
>
> Best Regards
> Sebastian
>
> > (sorry to the list moderator, gmail keeps sending under the wrong email
> > and I get a moderator message)
> >
> > On Thu, Apr 23, 2015 at 4:48 PM, Eric Dumazet wrote:
> > Wait, this is a 15-year-old experiment using Reno and a single test
> > bed, using the ns simulator.
> >
> > Naive TCP pacing implementations were tried, and probably failed.
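[Sebastian's fixed-threshold proposal above can be sketched as a small
mapping function. This is only an illustration of the numbers from the
email; the function name and structure are made up for the example, and
the bands are the ones still under "bike-shedding" discussion.]

```python
def color_for_latency(latency_ms):
    """Map a total (loaded) latency sample in milliseconds to the
    fixed color bands proposed in the email, independent of the
    idle/base latency of the link."""
    bands = [
        (100, None),      # 0 to 100 ms: no color
        (200, "green"),   # 101 to 200 ms
        (400, "yellow"),  # 201 to 400 ms
        (600, "orange"),  # 401 to 600 ms
        (1000, "red"),    # 601 to 1000 ms
    ]
    for upper_bound_ms, color in bands:
        if latency_ms <= upper_bound_ms:
            return color
    return "purple"       # 1001 ms and above
```

[The point of the fixed scale is visible here: a satellite user idling at
600 ms already lands in red, while under jb's relative scheme (yellow at
2x idle, red at 4x idle) the same reading would still look green.]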
> >
> > Pacing individual packets is quite bad; this is the first lesson one
> > learns when implementing TCP pacing, especially if you try to drive a
> > 40Gbps NIC.
> >
> > https://lwn.net/Articles/564978/
> >
> > Also note we use usec-based rtt samples, and nanosec high-resolution
> > timers in fq. I suspect the ns simulator experiment had sync issues
> > because of using low-resolution timers, or a simulation artifact
> > without any jitter source.
> >
> > Billions of flows are now 'paced', but keep in mind most packets are
> > not paced. We do not pace in slow start, and we do not pace when TCP
> > is ACK clocked.
> >
> > Only when someone sets SO_MAX_PACING_RATE below the TCP rate can we
> > eventually have all packets being paced, using TSO 'clusters' for TCP.
> >
> > On Thu, 2015-04-23 at 07:27 +0200, MUSCARIELLO Luca IMT/OLN wrote:
> > > one reference with a pdf publicly available. On the website there
> > > are various papers on this topic. Others might be more relevant,
> > > but I did not check all of them.
> > >
> > > Understanding the Performance of TCP Pacing,
> > > Amit Aggarwal, Stefan Savage, and Tom Anderson,
> > > IEEE INFOCOM 2000, Tel-Aviv, Israel, March 2000, pages 1157-1165.
> > >
> > > http://www.cs.ucsd.edu/~savage/papers/Infocom2000pacing.pdf
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
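[For readers unfamiliar with the SO_MAX_PACING_RATE option Eric mentions:
a minimal sketch of capping a socket's pacing rate, assuming a Linux host
with the fq qdisc (or TCP internal pacing) available. The helper name is
illustrative; the option value 47 comes from the Linux uapi headers
(asm-generic/socket.h) and is used as a fallback when this Python build
does not expose the constant.]

```python
import socket

# SO_MAX_PACING_RATE is 47 on Linux (asm-generic/socket.h); fall back
# to that raw value if this Python build does not expose the constant.
SO_MAX_PACING_RATE = getattr(socket, "SO_MAX_PACING_RATE", 47)

def cap_pacing_rate(sock, bytes_per_sec):
    """Cap the socket's transmit pacing rate in bytes per second.
    Set below the flow's natural TCP rate, effectively all of its
    packets end up paced, as described in the thread above."""
    sock.setsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE, bytes_per_sec)
    # Read the value back so the caller can confirm what the kernel kept.
    return sock.getsockopt(socket.SOL_SOCKET, SO_MAX_PACING_RATE)
```

[Usage: `cap_pacing_rate(sock, 1_000_000)` asks the kernel to pace the
socket's output at no more than 1 MB/s; no privileges are required to
lower the rate on your own socket.]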