From: Sebastian Moeller
Date: Sun, 27 Jul 2014 00:00:50 +0200
To: David Lang
Cc: cerowrt-devel, bloat
Subject: Re: [Cerowrt-devel] [Bloat] Check out www.speedof.me - no Flash

Hi David,

On Jul 26, 2014, at 22:53, David Lang wrote:

> On Sat, 26 Jul 2014, Sebastian Moeller wrote:
>
>> On Jul 26, 2014, at 01:26, David Lang wrote:
>>
>>> But I think that what we are seeing from the results of the bufferbloat work is that a properly configured network doesn't degrade badly as it gets busy.
>>>
>>> Individual services will degrade as they need more bandwidth than is available, but that sort of degradation is easy for the user to understand.
>>>
>>> The current status quo is one where good throughput at 80% utilization may be 80Mb, at 90% utilization it may be 85Mb, at 95% utilization it is 60Mb, and at 100% utilization it pulses between 10Mb and 80Mb, averaging around 20Mb, while latency goes from 10ms to multiple seconds over this range.
>>>
>>> With BQL and fq_codel, 80% utilization would still be 80Mb, 90% utilization would be 89Mb, 95% utilization would be 93Mb, with latency only going to 20ms.
>>>
>>> So there is a real problem to solve in the current status quo, and the question is whether there is a way to quantify the problem and test for it in ways that are repeatable, meaningful and understandable.
>>>
>>> This is a place to avoid letting perfect be the enemy of good enough.
>>>
>>> If you ask even relatively technical people about the quality of a network connection, they will talk to you about bandwidth and latency.
>>>
>>> But if you talk to a networking expert, they don't even mention those; they talk about signal strength, waveform distortion, bit error rates, error correction mechanisms, signal regeneration, and probably many other things that I don't know enough to even mention :-)
>>>
>>> Everyone is already measuring peak bandwidth today, and that is always going to be an important factor, so it will stay around.
>>>
>>> So we need to show the degradation of the network, and I think that either ping(loaded)-ping(unloaded) or ping(loaded)/ping(unloaded) will give us meaningful numbers that people can understand and talk about, while still being meaningful in the real world.
>>
>> Maybe we should follow Neil and Martin's lead and consider either ping(unloaded)-ping(loaded) or ping(unloaded)/ping(loaded) and call the whole thing a quality estimator or factor (as a negative quality, or a factor below 1, intuitively shows a degradation).
>
> That's debatable; if we call this a bufferbloat factor, the higher the number the more bloat you suffer.
>
> There's also the fact that the numeric differences aren't impressive if you do small/large vs small/larger, while large/small vs larger/small look substantially different. This is a psychology question.

	I am not in this for marketing ;) so I am not out for impressive numbers ;)

>
>> Also, my bet is on the difference, not the ratio: why should people with bad latency to begin with (satellite?) be more tolerant to further degradation? I would assume that on a high-latency link, if anything, the "budget" for further degradation might be smaller than on a low-latency link (reasoning: there might be a fixed latency budget for acceptable VoIP latency).
>
> We'd need to check. The problem with the difference is that it's far more affected by the bandwidth of the connection than a ratio is. If your measurement packets end up behind one extra data packet, your absolute number will grow based on the transmission time required for that data packet.
>
> So I'm leaning towards the ratio making more sense when comparing vastly different types of lines.

	But for a satellite link with high first-hop RTT the bufferbloat factor is always going to look minuscule… (I still think the difference is better.)

>
> As for the latency budget idea, I don't buy that; if it were the case then we would have no problems until latency exceeded the magic value, and then the service would fail entirely.

	No, rather think of it as pain increasing with increasing latency: not a threshold, but a gradual change from good over acceptable into painful...

> What we have in practice is that buffering covers up a lot of latency, as long as the jitter isn't bad. You may have a lag between what you say and when someone on the other end interrupts you without much trouble (as long as echo cancellation takes it into account).

	Remember transcontinental long-distance calls? If the delay gets too long, communication suffers, especially in real-time applications like VoIP.

>
>>> Which of the two is more useful is something that we would need to get a bunch of people with different speed lines to report, so we can see which is affected less by line differences and distance to target.
>>
>> Or make sure we always measure against the closest target (which with satellite might still be far away)?
>
> It's desirable to test against the closest target to reduce the impact on the Internet overall, but ideally the quality measurement would not depend on how far away the target is.

	No, the "quality" will be most affected by the bottleneck link, but the more hops we accumulate the more variance we pick up, and the more measurements we need to reach an acceptable confidence in our data...
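	To make the difference-versus-ratio question concrete, here is a minimal sketch in Python. The helper names and all RTT numbers are mine, purely for illustration (it assumes both example links pick up the same 200 ms of queueing delay under load); nothing here is measured data:

#!/usr/bin/env python
# Illustrative only: every RTT value below is hypothetical, not a measurement.

def bloat_difference(rtt_unloaded_ms, rtt_loaded_ms):
    """Proposed 'difference' metric: extra delay added under load, in ms."""
    return rtt_loaded_ms - rtt_unloaded_ms

def bloat_ratio(rtt_unloaded_ms, rtt_loaded_ms):
    """Proposed 'ratio' metric: loaded RTT as a multiple of unloaded RTT."""
    return rtt_loaded_ms / rtt_unloaded_ms

# Two hypothetical links, both picking up the same 200 ms of queueing delay
# under load: (name, unloaded RTT, loaded RTT) in milliseconds.
links = [
    ("DSL, low base RTT",        20.0, 220.0),
    ("satellite, high base RTT", 600.0, 800.0),
]

for name, unloaded, loaded in links:
    print("%-26s diff = %6.1f ms   ratio = %5.2f"
          % (name, bloat_difference(unloaded, loaded),
             bloat_ratio(unloaded, loaded)))

# Expected output (roughly):
#   DSL, low base RTT          diff =  200.0 ms   ratio = 11.00
#   satellite, high base RTT   diff =  200.0 ms   ratio =  1.33

	The same added queueing delay that multiplies a 20 ms line by eleven barely moves the ratio on a 600 ms satellite path, which is exactly the disagreement above: the ratio normalizes away the base RTT, while the difference keeps the added delay visible.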
	Best Regards
		Sebastian

>
> If you live in Silicon Valley, you are very close to a lot of good targets; if you live in Outer Mongolia (or on a farm in the midwestern US) you are a long way from any target, but we don't want the measurement to change a lot, because the problem is probably in the first couple of hops (absent a Verizon/Level3-type peering problem :-)
>
> David Lang
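	On the variance point a few paragraphs up (more hops, more jitter, more probes needed for a usable estimate), here is a rough sketch of the scaling using the textbook sample-size estimate n >= (z * sigma / E)^2. The jitter values are assumptions for illustration, not measurements, and real RTT distributions tend to be heavier-tailed than the normal approximation behind this formula; it is only meant to show how quickly the required probe count grows:

#!/usr/bin/env python
# Rough sample-size sketch: how many RTT probes are needed so that the mean
# RTT estimate has a margin of error of +/- E ms at ~95% confidence, given a
# per-path jitter (standard deviation) of sigma ms.  The sigma values below
# are assumptions for illustration, not measured data.

import math

def probes_needed(sigma_ms, margin_ms, z=1.96):
    # Standard normal-approximation sample size: n >= (z * sigma / E)^2
    return int(math.ceil((z * sigma_ms / margin_ms) ** 2))

for sigma in (2.0, 10.0, 40.0):   # assumed RTT jitter for short/medium/long paths
    n = probes_needed(sigma, margin_ms=5.0)
    print("jitter %5.1f ms -> about %3d probes for a +/-5 ms estimate" % (sigma, n))

# Roughly: 2 ms of jitter needs a single probe, 10 ms needs ~16,
# and 40 ms needs ~246 to pin the mean down to +/-5 ms.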