From: Jonathan Morton
To: Sebastian Moeller
Cc: cerowrt-devel <cerowrt-devel@lists.bufferbloat.net>, bloat
Date: Sun, 27 Jul 2014 16:00:37 +0300
Subject: Re: [Cerowrt-devel] [Bloat] Marketing problems

A marketing number?
Well, as we know, consumers respond best to "bigger is better" statistics. So anything reporting delay or ratio in the ways mentioned so far is doomed to failure - even if we convince the industry (or the regulators, more likely) to adopt them.

Another problem that needs solving is that marketing statistics tend to get gamed a lot. They must therefore be defined in such a way that gaming them is difficult without actually producing a corresponding improvement in the service. That's similar in nature to a security problem, by the way.

I have previously suggested defining a "responsiveness" measurement as a frequency. This is the inverse of latency, so it gets bigger as latency goes down. It would be relatively simple to declare that responsiveness is to be measured under a saturating load.

Trickier would be defining where in the world/network the measurement should be taken from and to. An ISP which hosted a test server on its internal network would hold an unfair advantage over other ISPs, so the sane solution is to insist that test servers are at least one neutral peering hop away from the ISP. ISPs that are geographically distant from the nearest test server would be disadvantaged, so test servers need to be provided throughout the densely populated parts of the world - say one per timezone and ten degrees of latitude if there's a major city in it.

At the opposite end of the measurement, we have the CPE supplied with the connection. That will of course be crucial to the upload half of the measurement.

While we're at it, we could try redefining bandwidth as an average, not a peak value. If the ISP has a "fair usage cap" of 300 GB per 30 days, then they aren't allowed to claim an average bandwidth greater than 926 kbps. National broadband availability initiatives can then be based on that figure.

- Jonathan Morton
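[Editor's note: the two figures proposed above - responsiveness as a frequency, and average bandwidth implied by a usage cap - can be sketched as simple arithmetic. This is an illustrative sketch only; the helper names are hypothetical, and it assumes decimal units (GB = 10^9 bytes, kbps = 1000 bit/s), which is what reproduces the 926 kbps figure in the message.]

```python
def responsiveness_hz(latency_seconds: float) -> float:
    """Responsiveness as a frequency: the inverse of latency,
    measured under a saturating load. Bigger is better."""
    return 1.0 / latency_seconds

def average_bandwidth_bps(cap_bytes: float, period_seconds: float) -> float:
    """Average bandwidth implied by a 'fair usage cap' over a
    billing period: cap in bits divided by period in seconds."""
    return cap_bytes * 8 / period_seconds

# 100 ms of latency under load corresponds to 10 Hz responsiveness.
print(responsiveness_hz(0.100))  # 10.0

# A 300 GB / 30 day cap limits the claimable average bandwidth:
cap_bytes = 300e9          # 300 GB, decimal
period = 30 * 86400        # 30 days in seconds
print(round(average_bandwidth_bps(cap_bytes, period) / 1000))  # 926 (kbps)
```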