[Bloat] Please enter issues into the issue tracker - Issue system organisation needed.

Jim Gettys jg at freedesktop.org
Thu Feb 24 11:32:39 EST 2011


On 02/24/2011 10:00 AM, Fred Baker wrote:
> Thanks, Jim.
>
> One thing that would help me; I have been a fan of RFC 2309 and RFC 3168 for some time. I suspect that between them any given queue should be manageable to a set depth; tests I have run suggest that with RED settings, average queue depth under load approximates min-threshold pretty closely, and ECN has the advantage that it manages to do so without dropping traffic. I suspect that this community's efforts will support that. Some thoughts:
>
> First, if the premise is wrong or there is a materially better solution, I'm all ears.

I certainly agree!

The conversations I've had with Van (as I wrote up in my blog at 
http://gettys.wordpress.com/2010/12/17/red-in-a-different-light/), 
however, are that classic RED 93 has no chance of solving the problems 
we face in home routers and broadband, both because of its tuning 
problems and because of the high dynamic range of goodput and the 
greatly variable kinds of traffic there (in contrast to the aggregated 
traffic in core routers, where classic RED has been effective).
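
For anyone who hasn't stared at RED internals recently, here is a 
minimal sketch of the drop decision, just to make the tuning problem 
concrete (Python pseudocode; names and constants are mine, see Floyd & 
Jacobson's 1993 paper for the real thing).  Note that w_q, min_th, 
max_th and max_p are all fixed knobs that must be set by hand for each 
link's bandwidth and traffic mix:

from dataclasses import dataclass
from random import random

@dataclass
class RedState:
    w_q: float = 0.002    # EWMA weight for the average queue estimate
    min_th: float = 5.0   # thresholds, in packets (or bytes)
    max_th: float = 15.0
    max_p: float = 0.02   # maximum marking/dropping probability
    avg: float = 0.0      # running average queue length
    count: int = -1       # packets since the last mark/drop

def red_should_drop(s: RedState, queue_len: int) -> bool:
    # Low-pass filter the instantaneous queue length.
    s.avg = (1 - s.w_q) * s.avg + s.w_q * queue_len
    if s.avg < s.min_th:
        s.count = -1
        return False
    if s.avg >= s.max_th:
        s.count = 0
        return True
    # Between the thresholds the drop probability rises linearly,
    # inflated by the count since the last drop so that drops are
    # spread roughly evenly over arrivals.
    s.count += 1
    p_b = s.max_p * (s.avg - s.min_th) / (s.max_th - s.min_th)
    p_a = p_b / max(1.0 - s.count * p_b, 1e-9)
    if random() < p_a:
        s.count = 0
        return True
    return False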

Certainly those network operators who can enable AQM but have not 
should do so: the lack is hurting corporate networks, some ISPs, and 
the broadband deployment, as shown by "Characterizing Residential 
Broadband Networks" (http://broadband.mpi-sws.org/residential) by 
Dischinger et al.  As far as data goes, the lack of tools is hurting.  
Smokeping inside ALU made me suspect (from the spiky latency) that we 
were running without AQM internally; I confirmed this by talking with 
our IT department (we do sophisticated classification for VoIP, etc.).  
As Windows XP retires, this will become much more of an issue, since 
newer systems enable TCP window scaling and a single machine will be 
able to saturate pretty much any path using a single TCP connection.

But it's clear that RED 93 isn't enough for everything, and its 
shortcomings are in large part why many network operators have not 
enabled it.

So we need better AQM algorithms and extensive testing: as you may have 
seen, SFB (Stochastic Fair Blue) just went into the Linux mainline this 
morning.
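
For those who want to experiment: once an iproute2 with matching 
support is in hand, enabling it should (untested; I am assuming the 
usual tc conventions) be roughly

	tc qdisc add dev eth0 root sfb

with the defaults, plus knobs for the usual parameters.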

I'd like it if Kathie and Van would get their "RED in a different light" 
paper done ASAP, with its nRED algorithm, so we can try it out as well. 
I've seen a somewhat later version than the one which escaped onto the 
network, but it's not quite ready for public consumption.

As far as ECN goes, Steve Bauer and Robert Beverly have been studying 
ECN deployment 
(http://gettys.wordpress.com/2011/02/22/caida-workshop-aims-2011-bauer-and-beverly-ecn-results/); 
I'm encouraged that it seems finally to be deploying, but we need better 
tools to debug the remaining broken hardware and networks.  I'm hoping 
we can start using ECN immediately in some environments (e.g. handsets), 
while taking a more guarded view about being able to use it everywhere. 
We'll know more as they get further into that study.
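
(For experimenters: on Linux the knob is the tcp_ecn sysctl, where 0 
disables ECN, 1 requests it on outgoing connections as well as 
accepting it, and 2 only accepts it when the peer asks.  So

	sysctl -w net.ipv4.tcp_ecn=1

is the "request it everywhere" setting, and whether that is yet safe 
everywhere is exactly what Bauer and Beverly's data should tell us.)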

So I think there is a BCP that needs to exist (and that will need 
frequent updating as better queue management algorithms deploy) to help 
people understand what they can/should do immediately, and what to do 
in different circumstances as better AQM algorithms are implemented and 
deployed and ECN deploys.

I've toyed with the idea of a bufferbloat BOF at the Prague IETF.  Your 
opinion would be valued here.

>
> Second, if the premise is correct, I'd like data that I can put in front of people to get them to configure it.



For a (lower) bound on the mess we are in, the Netalyzr data is the 
best I've seen for end users.

For broadband head ends, the Dischinger et al. paper is the best I've seen.

Smokeping is really wonderful for monitoring and detecting potential 
bufferbloat, but most don't even know of its existence.  (Anyone want to 
help set Smokeping up on bufferbloat.net?  The Smokeping installation at 
DSLReports is now quite dated.)
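
For anyone who takes up that offer, the work is small; a Smokeping 
target stanza is only a few lines (the host here is a placeholder, and 
do check the syntax against your installed version):

*** Targets ***

probe = FPing

menu = Top
title = Latency to interesting places

+ Gateway
menu = Home gateway
title = Latency to the home gateway
host = 192.168.1.1

The long-running latency plots it produces make bufferbloat's 
signature (latency spikes under load) jump right out.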

The best path diagnostic tool I'm aware of right now is PingPlotter; we 
need more (and freely available) tools for diagnosing bufferbloat in 
both the public Internet and private networks.  mtr on Linux is a step or 
two above old-fashioned traceroute and ping.  Steve Bauer knows how to 
modify traceroute/mtr to give us better ECN diagnosis.  I'd love it if 
someone would take mtr under their wing and push it forward for both 
bufferbloat detection and ECN testing.
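
In the meantime, even stock mtr run during a load test is informative; 
something like

	mtr --report --report-cycles 100 example.com

(with example.com replaced by a host past your suspected bottleneck) 
while the link is saturated will usually show which hop's latency 
balloons.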

Dave Clark arranged for the FCC tests (SamKnows) to contain a "latency 
under load" test; we've also talked to the Ookla (Speedtest.net) folks, 
who are interested in adding a test once they've finished their current 
infrastructure rollout.

But that still leaves us with poor tools.  Detecting that you have a 
problem while you are provoking it is pretty easy; but right now, given 
the poor tools in inexpert hands, problem reports won't easily be sorted 
out to which hop is the bottleneck, and we have problems everywhere from 
base host OSes, to home routers, to broadband gear, to some ISPs, to 3G 
wireless.

And some of these networks are complex aggregates: in the 3G problems 
I know of, not only can RNCs be at issue, but so can handsets (due to 
host bufferbloat) and the backhaul networks, if they are not running 
AQM.

Thankfully some of these problems are really easy for anyone to 
demonstrate, as I did early in my blog sequence (e.g. host ethernet and 
wireless), and engineers can see the results directly, to their 
personal benefit.  I hope this will go a long way toward making 
believers of people.
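
The recipe really is that trivial; roughly (hostnames are placeholders, 
and any bulk transfer, e.g. an scp of a big file, works as the load):

	# window 1: watch latency to your first hop
	ping 192.168.1.1

	# window 2: saturate the uplink for a minute
	iperf -c server.example.com -t 60

Watch the ping times climb from milliseconds to hundreds of 
milliseconds (or seconds) and stay there: that's the bloated buffer 
filling.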

But if we don't get better diagnostic tools quickly, the lack will 
generate a significantly higher support burden for ISPs; this seems 
bad to me, when much of the problem is inside homes and out of the 
ISPs' control.

>
> Third, there is a long-standing debate between Van and Sally on what units to use with min-threshold. Sally argues, or argued, in favor of byte count, as that correlates with time and biases mark/drop toward large datagrams, which is to say datagrams carrying data - which happen to be the datagrams that act as signals to Reno-et-al. Van argues, or argued, in favor of buffers, as what is being managed is the router's buffer resource. In our implementations, we provide for both options, and personally Sally's model makes more mathematical sense to me. Is there a "best practice" we can document? Is there a "best practice" we can document regarding min-threshold and max-threshold settings?

I'm really not the right person to ask; queue management has never been 
my area.  I've been a (network-based) UI guy who stumbled into the 
problem, realising that the issues I was seeing were the kiss of death 
for a large class of applications (particularly the class of apps I get 
paid to worry about...).  I knew just enough to know what I saw was 
broken, and knew the right people to go ask about it.

>
> In private email, I shared an approach that might make tuning a little more reliable and not require a max-threshold. If there is material being developed - an updated version of RED-Lite, or experience with other approaches - anything that would allow us to make the AQM algorithm self-tuning would be of great interest. The result of any such self-tuning algorithm is that it should be usable with dropping or marking, should keep the line operating at full utilization as long as there is traffic to send (e.g. not depend on the line occasionally going idle), maintain the queue at a "reasonably low delay" level under normal circumstances, not result in a given session being forced to shut down entirely, and not result in multiple drops on the same session within the same RTT in the normal case.
>

We know of the following possibilities:
	o the SFB stuff
	o the work Van pointed us at for 802.11, which people are implementing 
in Linux as we speak (see http://www.hamilton.ie/tianji_li/buffersizing.html; 
a rough sketch of the idea follows this list)
	o your suggestion, if you share it publicly
	o the nRED stuff of Kathie and Van, once they can get us a consistent 
document to work from
	o other possibilities we haven't heard of yet
	o evil TCP tricks to control window sizes
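
To make the 802.11 item concrete: as I understand the Hamilton work, 
the idea is to size the driver's buffer dynamically from the measured 
MAC service time, instead of fixing it at compile time.  A rough 
paraphrase (mine, not theirs; the names and constants are invented, 
read their paper for the real algorithm):

# Rough paraphrase of service-time-based 802.11 buffer sizing
# (in the spirit of the Hamilton work; constants and names are mine).

TARGET_DELAY = 0.2   # seconds of queueing delay we will tolerate
EWMA_W = 0.1         # smoothing weight for the service-time estimate

class BufferSizer:
    def __init__(self) -> None:
        self.t_serv = 0.001   # smoothed per-packet MAC service time (s)

    def on_tx_complete(self, service_time: float) -> int:
        """Update the estimate when the MAC finishes sending a packet;
        return the new queue limit, in packets."""
        self.t_serv = (1 - EWMA_W) * self.t_serv + EWMA_W * service_time
        # Queue no more than TARGET_DELAY's worth of packets at the
        # current service rate: on a fast, clean link the buffer grows;
        # when the channel degrades, it shrinks.
        return max(2, int(TARGET_DELAY / self.t_serv))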

I think we need to play with all of these to sort through what really 
works.  The testing-at-scale part is going to be the most difficult 
problem; having dealt with this at OLPC, where we essentially failed to 
understand the difficulty of diagnosing systems when testing at scale, 
I still have fresh scars on my back.

Kathie's warning I posted at 
http://gettys.wordpress.com/2011/02/10/goings-on-at-bufferbloat-net/ 
needs to be taken to heart by all.  Some of this is easy (the gross 
bufferectomies), but some is very subtle stuff indeed.

We also believe/know:
	o classic RED 93 won't work in many of the environments suffering 
really badly from bloat today.
	o RED 93 and the like *should* be configured everywhere we can 
productively do so (a sample configuration follows).
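
For the second point, on Linux the LARTC-era recipe for a ~10 Mbit/s 
link looks something like this (the numbers are illustrative and need 
tuning per link, which is of course exactly RED 93's problem):

	tc qdisc add dev eth0 root red limit 400000 min 30000 max 90000 \
	    avpkt 1000 burst 55 bandwidth 10mbit probability 0.02 ecn

limit/min/max are in bytes and burst is in packets; the ecn flag marks 
rather than drops for connections that negotiated ECN.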

And we don't yet know if ECN can be turned on, though we may know soon.

> There is one special case that I have wondered about from time to time; the impact of loss of SYNs or SYN-ACKs. The network I started thinking about that in was an African network that was seriously underprovisioned (they needed to, and eventually did, spend more money) on a satcom link. In essence, I wondered if there was a way that one could permit the first or second retransmission of a SYN, as opposed to the initial one, to get through in times of heavy load. The effect might be to let an existing session quiesce. That falls under "research" :-)

Yup.  There is lots of research to do; we should start tracking these 
items in the tracker as well, to help focus the effort and help funding 
and organisation.

>
> We have issues with at least some of our hardware in this; on the GSR, for example, queues are on the output card but IP processing is on the input card, meaning that we have lost all IP-related information by the time one would like to set ECN CE or inspect the DSCP value, and on the input card we have no real-time (microsecond-scale) way to inspect queue depth or integrated rate of a queue on the output card. The GSR is a mite elderly, but still widely used, and no, folks aren't going to replace cards at this stage in its life. So, ideas people have on working around such issues would be of interest.
>
>
You, along with most or all of the industry.  We're all in a very big 
bus together...

Sometimes the hardware will be impossible to fully fix; often software 
or firmware upgrades can help, but just as often the firmware is so old 
that no-one can fix it any more.  Sometimes one can mitigate the 
headaches, as we're doing on home broadband by shaping traffic to make 
sure the buffers don't fill, or with similar network configuration 
tricks, to ensure the bottleneck is not in the offending unfixable 
equipment until it can be fixed.
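
Concretely, the home-broadband mitigation is just to move the 
bottleneck: shape your upstream slightly below the modem's uplink rate 
so its oversized buffer never fills.  On Linux, something like (the 
rate is a placeholder; measure your real uplink and shave a few 
percent off)

	tc qdisc add dev eth0 root tbf rate 900kbit burst 3000 latency 50ms

does it for a nominal 1 Mbit/s uplink; fancier setups use HTB plus 
classification so VoIP and the like jump the queue.
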
		- Jim


>
>
> On Feb 24, 2011, at 7:19 AM, Jim Gettys wrote:
>
>> We have lots of different issues to track. We are uncovering more and more with time, and the responsibility for the issues is all over the Internet ecology.
>>
>> These issues include drivers in multiple operating systems, queue disciplines, OS distribution problems, broken networks, broadband gear, ISP's with broken configurations, routers with broken configurations, etc, etc, etc.  Many of the responsible organizations are completely unaware they have issues at the moment, and when they do wake up, they need to have a work list.  Serious as bufferbloat is, and generating tremendous support costs as it does, it is hidden in most organisations' issue tracking as obscure, hard-to-explain problems that have heretofore defied analysis.
>>
>> I think that for the sanity of the upstream open source projects and the companies that depend on them, of commercial software and hardware vendors, and of ourselves, it's time to start keeping track of these problems.
>>
>> A simple example is in the following mail, where Juliusz identified a bunch of Linux drivers with problems communicating back-pressure.
>> https://lists.bufferbloat.net/pipermail/bloat/2011-February/000036.html
>>
>> These driver bugs, of course, can and will be worked upstream in the project and/or responsible organisation; but from a practical point of view, these issues aren't really going to be fixed until people can actually take action on their own (by upgrading affected OS's, routers, broadband gear, etc. as appropriate).
>>
>> So I think we need to track bufferbloat issues in possibly a different way (and maybe with a bit different work flow) than a usual tracking system.
>>
>> First
>> =====
>> I think we need to capture what we know.  I encourage people to start entering issues in the bloat tracker found at:
>>
>> http://www.bufferbloat.net/projects/bloat/issues/new
>>
>> Note that redmine lets us move issues from one (sub)project to another, so we're best off capturing what we know immediately; we can sort and redeal later.
>>
>> Note: "We're all bozos on this glass bus, no stones allowed".  We know there are problems all over; issue descriptions should always be polite and constructive, please!
>>
>> Noting these issues will help people already involved (the mailing list had >120 people the last I looked, from large numbers of organisations) take concrete action.  Issues buried in mail threads are too easy to lose.
>>
>> Second
>> ======
>> As this effort grows, we'll need to organise the result, and delegate it appropriately as the effort scales.
>>
>> Today, we're probably best off with a single project: but we certainly expect that won't remain reasonable over time, possibly almost immediately.
>>
>> We installed Redmine in particular as it has a competent issue tracking system, as well as good (sub)project management, which can easily be delegated to others (one of the huge problems with Bugzilla or Trac is the lack of project management).
>>
>> If anyone is looking for a way to help bufferbloat and has experience with tracking systems on large, complex projects, I'd love to see someone organise this effort, and put some thought and structure into the categories, (sub)projects and work flow of issue states. I know from my OLPC experience just how important this can be, though this is a somewhat different situation.
>>
>>
>> 			Best regards,
>> 				- Jim
>>
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>
>



