From: Jim Gettys <jg@freedesktop.org>
To: "Livingood, Jason" <Jason_Livingood@cable.comcast.com>
Cc: "cerowrt-devel@lists.bufferbloat.net"
<cerowrt-devel@lists.bufferbloat.net>,
bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Cerowrt-devel] [Bloat] DOCSIS 3+ recommendation?
Date: Fri, 20 Mar 2015 09:48:28 -0400 [thread overview]
Message-ID: <CAGhGL2DjauqAdjxRmgU1sBEMs89YNvseo3XDt5QTn8x1wLFSCQ@mail.gmail.com> (raw)
In-Reply-To: <D1309C0B.FD59F%jason_livingood@cable.comcast.com>
On Thu, Mar 19, 2015 at 3:58 PM, Livingood, Jason <
Jason_Livingood@cable.comcast.com> wrote:
> On 3/19/15, 1:11 PM, "Dave Taht" <dave.taht@gmail.com> wrote:
>
> >On Thu, Mar 19, 2015 at 6:53 AM, <dpreed@reed.com> wrote:
> >> How many years has it been since Comcast said they were going to fix
> >>bufferbloat in their network within a year?
>
> I'm not sure anyone ever said it'd take a year. If someone did (even if it
> was me) then it was in the days when the problem appeared less complicated
> than it is and I apologize for that. Let's face it - the problem is
> complex and the software that has to be fixed is everywhere. As I said
> about IPv6: if it were easy, it'd be done by now. ;-)
>
As I remember the sequence of events, the hope was that the buffer size
control feature in DOCSIS could at least be used to cut bufferbloat down to
the "traditional" 100ms level. But reality intervened: buggy
implementations by too many vendors, is what I remember hearing from Rich
Woundy.
>
> >>It's almost as if the cable companies don't want OTT video or
> >>simultaneous FTP and interactive gaming to work. Of course not. They'd
> >>never do that.
>
> Sorry, but that seems a bit unfair. It flies in the face of what we have
> done and are doing. We've underwritten some of Dave's work, we got
> CableLabs to underwrite AQM work, and I personally pushed like heck to get
> AQM built into the default D3.1 spec (had CTO-level awareness & support,
> and was due to Greg White's work at CableLabs). We are starting to field
> test D3.1 gear now, by the way. We made some bad bets too, such as trying
> to underwrite an OpenWRT-related program with ISC, but not every tactic
> will always be a winner.
>
> As for existing D3.0 gear, it's not for lack of trying. Has any DOCSIS
> network of any scale in the world solved it? If so, I have something to
> use to learn from and apply here at Comcast - and I'd **love** an
> introduction to someone who has so I can get this info.
>
> But usually there are rational explanations for why something is still not
> done. One of them is that the at-scale operational issues are more
> complicated than some people realize. And there is always a case of
> prioritization - meaning things like running out of IPv4 addresses and not
> having service trump more subtle things like buffer bloat (and the effort
> to get vendors to support v6 has been tremendous).
>
> >I do understand there are strong forces against us, especially in the USA.
>
> I'm not sure there are any forces against this issue. It's more a question
> of awareness - it is not apparent it is more urgent than other work in
> everyone's backlog. For example, the number of ISP customers even aware of
> buffer bloat is probably 0.001%; if customers aren't asking for it, the
> product managers have a tough time arguing to prioritize buffer bloat work
> over new feature X or Y.
>
I agree with Jason on this one. We have to take bufferbloat mainstream to
generate "market pull". I was reluctant in the past, before we had
solutions in hand: very early in this quest, Dave Clark noted that "yelling
fire without having the exits marked" could be counterproductive. I think
we have the exits marked now. Time to yell "Fire".
Even when you get to engineers in the organizations who build the
equipment, it's hard. First you have to explain that "more is not better",
and "some packet loss is good for you".
Day-to-day market pressures for other features mean that:
1) many/most of the engineers don't see this as what they need to do in
the next quarter/year;
2) their management doesn't see that working on it should take any of
their time. It won't help them sell the next set of gear.
***So we have to generate demand from the market.***
Now, I can see a couple ways to do this:
1) help expose the problem, preferably in a dead simple way that everyone
sees. If we can get Ookla to add a simple test to their test system, this
would be a good start. If not, other test sites are needed. Nice as
Netalyzr is, it (a) tops out around 20 Mbps, and (b) buries the buffering
results among 50 other numbers.
2) Markets such as gaming are large, and very latency sensitive. Even
better, lots of geeks hang out there. So investing in educating that
submarket may help pull things through the system overall.
3) Competitive pressures can be very helpful: but this requires at least
one significant player in each product category to "get it". So these are
currently slow falling dominoes.
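To make point 1 concrete: the core of a "dead simple" latency-under-load
test is just comparing RTT samples taken while the link is idle against
samples taken while the link is saturated. Here is a minimal sketch of the
scoring step in Python; the grading thresholds are purely illustrative
assumptions, not taken from any test suite or standard:

```python
import statistics

def bloat_report(idle_rtts_ms, loaded_rtts_ms):
    """Compare idle vs. under-load RTT samples and report the added latency.

    idle_rtts_ms:   RTT samples (ms) taken with the link quiescent.
    loaded_rtts_ms: RTT samples (ms) taken while up/downloads saturate the link.
    """
    idle = statistics.median(idle_rtts_ms)
    loaded = statistics.median(loaded_rtts_ms)
    added = loaded - idle  # latency induced by buffering at the bottleneck

    # Illustrative letter grades only -- real thresholds would need debate.
    if added < 30:
        grade = "A"
    elif added < 100:
        grade = "B"
    elif added < 400:
        grade = "C"
    else:
        grade = "F"
    return {"idle_ms": idle, "loaded_ms": loaded,
            "added_ms": added, "grade": grade}

# Example: a 20 ms idle link that balloons to ~500 ms under a saturating upload
print(bloat_report([19, 20, 21], [480, 500, 520]))
```

A real consumer-facing test would gather the samples itself (ping while
running parallel bulk transfers), but the single "added_ms" number is the
kind of one-line result a speed-test site could show.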
> One suggestion I have made to increase awareness is that there be a nice,
> web-based, consumer-friendly latency under load / bloat test that you
> could get people to run as they do speed tests today. (If someone thinks
> they can actually deliver this, I will try to fund it - ping me off-list.)
> I also think a better job can be done explaining buffer bloat - it's hard
> to make an 'elevator pitch' about it.
>
Yeah, the elevator pitch is hard, since a number of things around
bufferbloat are counterintuitive. I know, I've tried, and not really
succeeded. The best kinds of metaphors have been traffic related
("building parking lots at all the bottlenecks"), and explanations like
"packet loss is how the Internet enforces speed limits":
http://www.circleid.com/posts/20150228_packet_loss_how_the_internet_enforces_speed_limits/
>
> It reminds me a bit of IPv6 several years ago. Rather than saying in
> essence 'you operators are dummies' for not already fixing this, maybe
> assume the engineers all 'get it' and want to do it.
Many/most practicing engineers are still unaware of it, or if they have
heard the word bufferbloat, still don't "get" that they are seeing
bufferbloat's effects all the time.
> Because we really do
> get it and want to do something about it. Then ask those operators what
> they need to convince their leadership and their suppliers and product
> managers and whomever else that it needs to be resourced more effectively
> (see above for example).
>
> We're at least part of the way there in DOCSIS networks. It is in D3.1 by
> default, and we're starting trials now. And probably within 18-24 months
> we won't buy any DOCSIS CPE that is not 3.1.
>
> The question for me is how and when to address it in DOCSIS 3.0.
>
We should talk at IETF.
>
> - Jason
>
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>