[Cerowrt-devel] [Bloat] DOCSIS 3+ recommendation?
David P. Reed
dpreed at reed.com
Fri Mar 20 09:31:29 EDT 2015
The mystery in most users' minds is that a ping taken when there is no load does not tell them anything at all about why the network connection will suck when their kid is uploading to YouTube.
So giving them ping time is meaningless.
I think most network engineers assume that an unloaded ping time is a useful measure of a badly bufferbloated system. It is not.
The only measure that matters is ping time under maximum packet load.
And that requires a way to test RTT at maximum load.
There is no problem with doing that ... other than that, to understand why and how it is relevant, you have to understand Internet congestion control.
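To make "test RTT at maximum load" concrete, here is a minimal sketch of such a measurement, assuming a Unix-ish host with ping and iperf3 installed and some reachable iperf3 server (the server name below is only a placeholder). Tools like the RRUL tests in netperf-wrapper do this far more carefully; this only illustrates measuring RTT before and during saturation.

    import re, subprocess, time

    PING_TARGET = "8.8.8.8"             # any stable, nearby host works
    IPERF_SERVER = "iperf.example.net"  # placeholder - substitute a real iperf3 server

    def median_rtt_ms(seconds):
        """Ping once a second for `seconds` seconds; return the median RTT in ms."""
        out = subprocess.run(["ping", "-c", str(seconds), PING_TARGET],
                             capture_output=True, text=True).stdout
        rtts = sorted(float(m) for m in re.findall(r"time=([\d.]+)", out))
        return rtts[len(rtts) // 2]

    idle = median_rtt_ms(10)            # baseline RTT on an idle link

    # Saturate the upstream for 30 s and measure RTT again while it runs.
    load = subprocess.Popen(["iperf3", "-c", IPERF_SERVER, "-t", "30"],
                            stdout=subprocess.DEVNULL)
    time.sleep(5)                       # give the bottleneck queue time to fill
    loaded = median_rtt_ms(10)
    load.wait()

    print(f"idle ~{idle:.0f} ms, under load ~{loaded:.0f} ms, "
          f"added queueing delay ~{loaded - idle:.0f} ms")

On a bufferbloated link the third number is the one that explains the pain.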
Having had to testify before the CRTC about this, I learned that most access providers (the Canadian ones, at any rate) claim that such measurements are never made as a measure of quality, that you can calculate expected latency from average throughput using Little's lemma, and that dropped packets are the right measure of quality of service.
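Applied honestly, Little's lemma actually makes the point for us: the latency contributed by a queue is its occupancy divided by its drain rate, and average throughput tells you nothing about occupancy. A back-of-the-envelope sketch, with assumed but typical numbers rather than anything measured:

    # Assumed numbers, illustrative only: an unmanaged 256 KB modem buffer
    # draining into a 5 Mbit/s provisioned upstream.
    buffer_bytes = 256 * 1024
    upstream_bps = 5_000_000

    # Little's law: time in system = occupancy / service rate.
    queue_delay_s = (buffer_bytes * 8) / upstream_bps
    print(f"a full buffer adds ~{queue_delay_s * 1000:.0f} ms of latency")  # ~420 ms

Throughput stays at the full 5 Mbit/s the whole time, so a throughput-only report looks perfect while every interactive packet waits behind nearly half a second of queue.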
Ookla ping time is useless in a context where even the "experts" wearing ties from the top-grossing Internet firms are so confused, and perhaps deliberately misleading... A ruling during the proceeding forced them to provide whatever data they had about congestion in their networks; they responded that they had no data - they never measured queueing delay, and they disputed that it mattered. The proper measure of congestion, they said, was throughput.
I kid you not.
So Ookla ping time is useless against such public ignorance.
That's completely wrong for
On Mar 20, 2015, MUSCARIELLO Luca IMT/OLN <luca.muscariello at orange.com> wrote:
>I agree. Having that ping included in Ookla would help a lot more
>
>Luca
>
>
>On 03/20/2015 12:18 AM, Greg White wrote:
>> Netalyzr is great for network geeks, hardly consumer-friendly, and even so
>> the "network buffer measurements" part is buried in 150 other statistics.
>> Why couldn't Ookla* add a simultaneous "ping" test to their throughput
>> test? When was the last time someone leaned on them?
>>
>>
>> *I realize not everyone likes the Ookla tool, but it is popular and about
>> as "sexy" as you are going to get with a network performance tool.
>>
>> -Greg
>>
>>
>>
>> On 3/19/15, 2:29 PM, "dpreed at reed.com" <dpreed at reed.com> wrote:
>>
>>> I do think engineers operating networks get it, and that Comcast's
>>> engineers really get it, as I clarified in my followup note.
>>>
>>> The issue is indeed prioritization of investment, engineering resources
>>> and management attention. The teams at Comcast in the engineering side
>>> have been the leaders in "bufferbloat minimizing" work, and I think they
>>> should get more recognition for that.
>>>
>>> I disagree a little bit about not having a test that shows the issue, and
>>> the value the test would have in demonstrating the issue to users.
>>> Netalyzr has been doing an amazing job on this since before the
>>> bufferbloat term was invented. Every time I've talked about this issue
>>> I've suggested running Netalyzr, so I have a personal set of comments
>>> from people all over the world who run Netalyzr on their home networks,
>>> on hotel networks, etc.
>>>
>>> When I have brought up these measurements from Netalyzr (which are not
>>> aimed at showing the problem as users experience it) I observe an
>>> interesting reaction from many industry insiders: the results are not
>>> "sexy enough for stupid users" and also "no one will care".
>>>
>>> I think the reaction characterizes the problem correctly - but the second
>>> part is the most serious objection. People don't need a measurement
>>> tool, they need to know that this is why their home network sucks
>>> sometimes.
>>>
>>>
>>>
>>>
>>>
>>> On Thursday, March 19, 2015 3:58pm, "Livingood, Jason"
>>> <Jason_Livingood at cable.comcast.com> said:
>>>
>>>> On 3/19/15, 1:11 PM, "Dave Taht" <dave.taht at gmail.com> wrote:
>>>>
>>>>> On Thu, Mar 19, 2015 at 6:53 AM, <dpreed at reed.com> wrote:
>>>>>> How many years has it been since Comcast said they were going to fix
>>>>>> bufferbloat in their network within a year?
>>>> I'm not sure anyone ever said it'd take a year. If someone did (even if it
>>>> was me) then it was in the days when the problem appeared less complicated
>>>> than it is and I apologize for that. Let's face it - the problem is
>>>> complex and the software that has to be fixed is everywhere. As I said
>>>> about IPv6: if it were easy, it'd be done by now. ;-)
>>>>
>>>>>> It's almost as if the cable companies don't want OTT video or
>>>>>> simultaneous FTP and interactive gaming to work. Of course not. They'd
>>>>>> never do that.
>>>> Sorry, but that seems a bit unfair. It flies in the face of what we have
>>>> done and are doing. We've underwritten some of Dave's work, we got
>>>> CableLabs to underwrite AQM work, and I personally pushed like heck to get
>>>> AQM built into the default D3.1 spec (had CTO-level awareness & support,
>>>> and was due to Greg White's work at CableLabs). We are starting to field
>>>> test D3.1 gear now, by the way. We made some bad bets too, such as trying
>>>> to underwrite an OpenWRT-related program with ISC, but not every tactic
>>>> will always be a winner.
>>>>
>>>> As for existing D3.0 gear, it's not for lack of trying. Has any DOCSIS
>>>> network of any scale in the world solved it? If so, I have something to
>>>> use to learn from and apply here at Comcast - and I'd **love** an
>>>> introduction to someone who has so I can get this info.
>>>>
>>>> But usually there are rational explanations for why something is still not
>>>> done. One of them is that the at-scale operational issues are more
>>>> complicated than some people realize. And there is always a case of
>>>> prioritization - meaning things like running out of IPv4 addresses and not
>>>> having service trump more subtle things like buffer bloat (and the effort
>>>> to get vendors to support v6 has been tremendous).
>>>>
>>>>> I do understand there are strong forces against us, especially in the
>>>>> USA.
>>>> I'm not sure there are any forces against this issue. It's more a question
>>>> of awareness - it is not apparent it is more urgent than other work in
>>>> everyone's backlog. For example, the number of ISP customers even aware of
>>>> buffer bloat is probably 0.001%; if customers aren't asking for it, the
>>>> product managers have a tough time arguing to prioritize buffer bloat work
>>>> over new feature X or Y.
>>>>
>>>> One suggestion I have made to increase awareness is that there be a nice,
>>>> web-based, consumer-friendly latency under load / bloat test that you
>>>> could get people to run as they do speed tests today. (If someone thinks
>>>> they can actually deliver this, I will try to fund it - ping me
>>>> off-list.)
>>>> I also think a better job can be done explaining buffer bloat - it's hard
>>>> to make an 'elevator pitch' about it.
>>>>
>>>> It reminds me a bit of IPv6 several years ago. Rather than saying in
>>>> essence 'you operators are dummies' for not already fixing this, maybe
>>>> assume the engineers all 'get it' and want to do it. Because we really do
>>>> get it and want to do something about it. Then ask those operators what
>>>> they need to convince their leadership and their suppliers and product
>>>> managers and whomever else that it needs to be resourced more effectively
>>>> (see above for example).
>>>>
>>>> We're at least part of the way there in DOCSIS networks. It is in D3.1 by
>>>> default, and we're starting trials now. And probably within 18-24 months
>>>> we won't buy any DOCSIS CPE that is not 3.1.
>>>>
>>>> The question for me is how and when to address it in DOCSIS 3.0.
>>>>
>>>> - Jason
>>>>
>>>>
>>>>
>>>>
>>>