[Cerowrt-devel] [Bloat] Check out www.speedof.me - no Flash
Sebastian Moeller
moeller0 at gmx.de
Sat Jul 26 18:00:50 EDT 2014
Hi David,
On Jul 26, 2014, at 22:53 , David Lang <david at lang.hm> wrote:
> On Sat, 26 Jul 2014, Sebastian Moeller wrote:
>
>> On Jul 26, 2014, at 01:26 , David Lang <david at lang.hm> wrote:
>>
>>> But I think that what we are seeing from the results of the bufferbloat work is that a properly configured network doesn't degrade badly as it gets busy.
>>>
>>> Individual services will degrade as they need more bandwidth than is available, but that sort of degradation is easy for the user to understand.
>>>
>>> The current status quo is where good throughput at 80% utilization may be 80Mb, at 90% utilization it may be 85Mb, at 95% utilization it is 60Mb, and at 100% utilization it pulses between 10Mb and 80Mb, averaging around 20Mb, and latency goes from 10ms to multiple seconds over this range.
>>>
>>> With BQL and fq_codel, 80% utilization would still be 80Mb, 90% utilization would be 89Mb, and 95% utilization would be 93Mb, with latency only going to 20ms.
>>>
>>> So there is a real problem to solve in the current status quo, and the question is whether there is a way to quantify the problem and test for it in ways that are repeatable, meaningful and understandable.
>>>
>>> This is a place to avoid letting perfect be the enemy of good enough.
>>>
>>> If you ask even relatively technical people about the quality of a network connection, they will talk to you about bandwidth and latency.
>>>
>>> But if you talk to a networking expert, they don't even mention that, they talk about signal strength, waveform distortion, bit error rates, error correction mechanisms, signal regeneration, and probably many other things that I don't know enough to even mention :-)
>>>
>>>
>>> Everyone is already measuring peak bandwidth today, and that is always going to be an important factor, so it will stay around.
>>>
>>> So we need to show the degradation of the network, and I think that either ping(loaded)-ping(unloaded) or ping(loaded)/ping(unloaded) will give us meaningful numbers that people can understand and talk about, while still being meaningful in the real world.
>>
>> Maybe we should follow Neil and Martin’s lead and consider either ping(unloaded)-ping(loaded) or ping(unloaded)/ping(loaded) and call the whole thing a quality estimator or factor (as a negative quality or a factor < 1 intuitively shows a degradation).
>
> That's debatable; if we call this a bufferbloat factor, the higher the number, the more bloat you suffer.
>
> There's also the fact that the numeric differences aren't impressive if you do small/large vs small/larger, while large/small vs larger/small look substantially different. This is a psychology question.
I am not in this for marketing ;) so I am not out for impressive numbers ;)
>
>> Also, my bet is on the difference, not the ratio: why should people with bad latency to begin with (satellite?) be more tolerant of further degradation? If anything, I would assume that on a high-latency link the “budget” for further degradation might be smaller than on a low-latency link (reasoning: there might be a fixed latency budget for acceptable VoIP).
>
> we'd need to check. The problem with difference is that it's far more affected by the bandwidth of the connection than a ratio is. If your measurement packets end up behind one extra data packet, your absolute number will grow based on the transmission time required for that data packet.
>
> so I'm leaning towards the ratio making more sense when comparing vastly different types of lines.
But for a satellite link with high 1st-hop RTT the bufferbloat factor is always going to look minuscule… (I still think the difference is better.)
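To make the difference-vs-ratio point concrete, here is a rough sketch in Python (all RTT numbers below are made up for illustration, not measurements):

# Illustration only: compare difference vs ratio as a bloat metric
# for a low-latency DSL-like link and a high-latency satellite link.
# The RTT values are assumed for the sake of the example.

links = {
    "dsl":       {"unloaded_ms": 20.0,  "loaded_ms": 220.0},
    "satellite": {"unloaded_ms": 600.0, "loaded_ms": 800.0},
}

for name, rtt in links.items():
    diff = rtt["loaded_ms"] - rtt["unloaded_ms"]   # added queueing delay
    ratio = rtt["loaded_ms"] / rtt["unloaded_ms"]  # relative degradation
    print(f"{name:10s} difference = {diff:6.1f} ms   ratio = {ratio:5.2f}")

Both links pick up the same 200 ms of queueing delay, but the ratio makes the satellite link look nearly untouched (1.33 vs 11.00), which is exactly why the factor looks minuscule on high-RTT paths.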
>
> As for the latency budget idea, I don't buy that; if it were the case, then we would have no problems until latency exceeded the magic value, and then the service would fail entirely.
No, rather think of it this way: as latency increases, pain increases; it's not a threshold but a gradual change from good through acceptable into painful...
> What we have in practice is that buffering covers up a lot of latency, as long as the jitter isn't bad. You may have a lag between what you say and when someone on the other end interrupts you without much trouble (as long as echo cancellation takes it into account).
Remember transcontinental long-distance calls? If the delay gets too long, communication suffers, especially in real-time applications like VoIP.
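A rough sketch of the latency-budget reasoning (the 150 ms one-way target is the often-quoted ITU-T G.114 guideline for interactive voice; the per-path base delays are assumptions, not measurements):

# Illustration only: how much headroom for queueing delay is left before
# a voice call exceeds a one-way delay target.
# 150 ms follows the commonly cited ITU-T G.114 guidance; the base
# one-way delays below are assumptions for the example.

ONE_WAY_TARGET_MS = 150.0

paths = {
    "local dsl": 10.0,    # assumed access + propagation delay, one way
    "satellite": 300.0,   # geostationary hop, roughly 250-300 ms one way
}

for name, base_ms in paths.items():
    headroom = ONE_WAY_TARGET_MS - base_ms
    if headroom > 0:
        print(f"{name:10s}: {headroom:.0f} ms of queueing delay tolerable")
    else:
        print(f"{name:10s}: {-headroom:.0f} ms over budget before any bloat")

The high-RTT link has no budget left at all, which is why I would judge added delay in absolute terms rather than relative to the baseline.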
>
>>> Which of the two is more useful is something we would need a bunch of people with different-speed lines to report on, to see which is affected less by line differences and distance to target.
>>
>> Or make sure we always measure against the closest target (which with satellite might still be far away)?
>
> It's desirable to test against the closest target to reduce the impact on the Internet overall, but ideally the quality measurement would not depend on how far away the target is.
No, the “quality” will be most affected by the bottleneck link, but the more hops we accumulate, the more variance we pick up and the more measurements we need to reach an acceptable confidence in our data...
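As a rough sketch of what “more variance means more measurements” implies (the target precision, confidence level and standard deviations are assumptions for illustration):

# Illustration only: estimate how many RTT samples are needed so that a
# 95% confidence interval for the mean RTT is no wider than +/- TARGET_MS.
# Uses the normal approximation n ~= (z * s / E)^2; all numbers assumed.
import math

Z_95 = 1.96          # two-sided 95% confidence
TARGET_MS = 2.0      # desired half-width of the interval, in ms

for hops, stddev_ms in [(2, 3.0), (8, 10.0), (15, 25.0)]:
    n = math.ceil((Z_95 * stddev_ms / TARGET_MS) ** 2)
    print(f"{hops:2d} hops, stddev {stddev_ms:5.1f} ms -> ~{n} samples")

The sample count grows with the square of the standard deviation, so the further (and more variable) the path to the target, the longer we have to measure for the same confidence.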
Best Regards
Sebastian
>
> If you live in Silicon Valley, you are very close to a lot of good targets; if you live in Outer Mongolia (or on a farm in the midwestern US) you are a long way from any target. But we don't want the measurement to change a lot, because the problem is probably in the first couple of hops (absent a Verizon/Level3 type peering problem :-)
>
> David Lang