<p dir="ltr">A marketing number? Well, as we know, consumers respond best to "bigger is better" statistics. So anything reporting delay or ratio in the ways mentioned so far is doomed to failure - even if we convince the industry (or the regulators, more likely) to adopt them.</p>
<p dir="ltr">Another problem that needs solving is that marketing statistics tend to get gamed a lot. They must therefore be defined in such a way that gaming them is difficult without actually producing a corresponding improvement in the service. That's similar in nature to a security problem, by the way.</p>
<p dir="ltr">I have previously suggested defining a "responsiveness" measurement as a frequency. This is the inverse of latency, so it gets bigger as latency goes down. It would be relatively simple to declare that responsiveness is to be measured under a saturating load.</p>
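<p dir="ltr">The idea can be sketched in a few lines. This is an illustrative calculation, not a measurement tool; the latency figures are hypothetical examples of an idle link versus a bufferbloated one under load:</p>

```python
# Sketch: "responsiveness" expressed as a frequency, the inverse of
# round-trip latency measured under saturating load. Bigger is better.

def responsiveness_hz(latency_seconds: float) -> float:
    """Responsiveness in hertz: round trips per second the path sustains."""
    return 1.0 / latency_seconds

# Hypothetical example figures:
print(responsiveness_hz(0.020))  # 20 ms under load  -> 50.0 Hz
print(responsiveness_hz(2.0))    # 2 s of bufferbloat -> 0.5 Hz
```

<p dir="ltr">A marketing department can happily advertise "50 Hz responsiveness", and the number only goes up when queueing delay actually goes down.</p>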
<p dir="ltr">Trickier would be defining where in the world/network the measurement should be taken from and to. An ISP that hosted a test server on its internal network would hold an unfair advantage over other ISPs, so the sane solution is to insist that test servers sit at least one neutral peering hop away from the ISP under test. ISPs that are geographically distant from the nearest test server would be disadvantaged, so test servers need to be provided throughout the densely populated parts of the world - say, one per timezone per ten degrees of latitude, wherever that cell contains a major city.</p>
<p dir="ltr">At the opposite end of the measurement, we have the CPE (customer premises equipment) supplied with the connection. That will of course be crucial to the upload half of the measurement.</p>
<p dir="ltr">While we're at it, we could try redefining bandwidth as an average, not a peak value. If the ISP has a "fair usage cap" of 300GB per 30 days, then they aren't allowed to claim an average bandwidth greater than 926kbps. National broadband availability initiatives can then be based on that figure.</p>
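<p dir="ltr">The arithmetic behind that 926kbps figure is straightforward (using decimal gigabytes, i.e. 300 × 10⁹ bytes):</p>

```python
# Sketch of the cap-to-average-bandwidth conversion described above:
# a "fair usage cap" implies a ceiling on claimable average bandwidth.

def avg_bandwidth_bps(cap_bytes: float, period_seconds: float) -> float:
    """Maximum average bandwidth (bits/s) permitted by a data cap."""
    return cap_bytes * 8 / period_seconds

cap = 300e9            # 300 GB cap, decimal gigabytes
period = 30 * 86400    # 30 days in seconds
print(round(avg_bandwidth_bps(cap, period) / 1e3))  # -> 926 (kbps)
```

<p dir="ltr">Any ISP advertising "up to 100Mbps" alongside such a cap is really selling a sub-megabit average service, and this makes that visible.</p>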
<p dir="ltr">- Jonathan Morton</p>