[Cerowrt-devel] [Bloat] fq_codel is two years old
dpreed at reed.com
Fri May 16 16:17:07 EDT 2014
I agree with you Jim about being careful with QoS. That's why Andrew Odlyzko proposed the experiment with exactly two classes, and proposed it as an *experiment*. So many researchers and IETF members seem to think we should just turn on diffserv and everything will work great... I've seen very senior members of IETF actually propose diffserv become a provider-wide standard as soon as possible. I suppose they have a couple of ns2 runs that show "nothing can go wrong". :-)
(that's why I'm so impressed by the fq_codel work - it's more than just simulation, but has been tested and more or less stressed in real life, yet it is quite simple).
I don't agree with the idea that switches alone can solve global system problems by themselves. That's why the original AIMD algorithms use packet drops as signals, but make the endpoints responsible for managing congestion. The switches have nothing to do with the AIMD algorithm, they just create the control inputs.
So it is kind of telling that Valdis cites a totally "switch-centric" view from NANOG's perspective. It's not the job of switches to manage congestion, just as it is not the job of endpoints to program switches. There's a separation of concerns.
The simpler observation would be "if you are a switch, there is NOTHING you can do to stop congestion. Even dropping packets doesn't ameliorate congestion. However, if you are a switch there are some things you can tell the endpoints, in particular the receiving endpoints of flows traveling across the switch, about the local 'collision' of packets trying to get through the switch at the same time."
Since the Internet end-to-end protocols are "receiver controlled" (TCP's receive window is what controls the sender's flow, but it is set by the receiver), the locus of decision making is the collection of receivers.
Buffering is not the real issue - the issue is the frequency with which the packets of all the flows going through a particular switch "collide". The control problem is to make that frequency of collision quite small.
The nice thing about packet drops is that collisions are remediated immediately, rather than creating sustained bottlenecks that increase the "collision cross section" of that switch, increasing the likelihood of collisions in the switch dramatically. Replacing a collided/dropped packet with a much smaller "token" that goes on to the receiver would keep the collision cross section from growing, but provide better samples of collision info to the receiver. For fairness, you want all packets involved in a collision to carry information, and ideally all "near collisions" to also carry information about near collisions.
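That replace-the-packet-with-a-token idea could be sketched as a small record the switch forwards toward the receiver. This is only an illustration of the proposal; every field name here is hypothetical, not part of any existing protocol:

```python
from dataclasses import dataclass

@dataclass
class CollisionToken:
    """Hypothetical small record a switch forwards to the receiver
    in place of (or alongside) a packet involved in a collision."""
    flow_id: str        # identifies the flow whose packet collided
    dropped_bytes: int  # size of the packet this token stands in for
    near: bool          # True for a near collision (packet itself was forwarded)
    switch_id: int      # which switch observed the collision

def make_token(flow_id: str, packet_bytes: int, near: bool, switch_id: int) -> CollisionToken:
    # A token is tiny compared to the packet it replaces, so forwarding it
    # does not enlarge the switch's "collision cross section" the way
    # buffering the full packet would.
    return CollisionToken(flow_id, packet_bytes, near, switch_id)
```

The point of the sketch is only that the token carries collision information downstream at negligible cost, leaving the reaction to the receiver.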
A collision here is simply defined: a packet that enters a switch is considered to have collided with any other packets that have not yet completed traversal of the switch when it arrives.
You can expand packets' virtual time in the switch by thinking of them as "virtually still in the switch" for some number of bit times after they exit. Then a "near collision" happens between a packet and any packets that are still virtually in the switch. Near collisions are signals that can keep the system inside the "ballistic region" of the phase space.
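The two definitions above can be sketched as a per-output-link classifier. This is a minimal sketch of the idea, not anything from a real switch; the link rate and the size of the virtual-time window (in bit times) are assumed values:

```python
LINK_RATE_BPS = 1_000_000_000   # assumed: 1 Gb/s output link
NEAR_WINDOW_BITS = 12_000       # assumed: packets stay "virtually in the
                                # switch" this many bit times after exiting

class OutputLink:
    def __init__(self):
        self.busy_until = 0.0     # time when the in-flight packet finishes
        self.virtual_until = 0.0  # busy_until plus the virtual-time extension

    def classify(self, arrival_time, packet_bits):
        """Return 'collision', 'near-collision', or 'clear' for an arrival."""
        if arrival_time < self.busy_until:
            kind = "collision"       # overlaps a packet still traversing
        elif arrival_time < self.virtual_until:
            kind = "near-collision"  # overlaps a packet "virtually" present
        else:
            kind = "clear"
        # Queue this packet behind whatever is still on the link.
        start = max(arrival_time, self.busy_until)
        self.busy_until = start + packet_bits / LINK_RATE_BPS
        self.virtual_until = self.busy_until + NEAR_WINDOW_BITS / LINK_RATE_BPS
        return kind
```

With back-to-back 12,000-bit packets on a 1 Gb/s link, a second arrival during the first packet's 12 µs traversal classifies as a collision, and one landing just after traversal but inside the virtual window classifies as a near collision.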
(you can track near collisions by a little memory on each outbound link state - and even use Bloom Filters to quickly detect collisions, but that is for a different lecture).
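The Bloom filter aside can be illustrated with a tiny filter answering, approximately, "has flow F put a packet through this link within the current virtual window?" This is a generic Bloom filter sketch, not the design the author had in mind; sizes and hash count are arbitrary assumptions:

```python
import hashlib

class BloomFilter:
    """Approximate set of flow IDs seen on one outbound link in the
    current near-collision window; cleared when the window rolls over."""

    def __init__(self, m_bits=1024, k_hashes=3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = 0

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def maybe_contains(self, item):
        # False positives possible; false negatives are not.
        return all(self.bits >> p & 1 for p in self._positions(item))

    def reset(self):
        self.bits = 0  # start a fresh near-collision window
```

A hit on a packet's flow ID marks that flow as a participant in a (near) collision on this link; the small, fixed per-link state is the appeal.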
Please steal this idea and develop it.
On Friday, May 16, 2014 12:06pm, "Jim Gettys" <jg at freedesktop.org> said:
On Fri, May 16, 2014 at 10:52 AM, <[Valdis.Kletnieks at vt.edu](mailto:Valdis.Kletnieks at vt.edu)> wrote:
On Thu, 15 May 2014 16:32:55 -0400, [dpreed at reed.com](mailto:dpreed at reed.com) said:
> And in the end of the day, the problem is congestion, which is very
> non-linear. There is almost no congestion at almost all places in the Internet
> at any particular time. You can't fix congestion locally - you have to slow
> down the sources across all of the edge of the Internet, quickly.
There's a second very important point that somebody mentioned on the NANOG
list a while ago:
If the local router/net/link/whatever isn't congested, QoS cannot do anything
to improve life for anybody.
If there *is* congestion, QoS can only improve your service to the normal
uncongested state - and it can *only do so by making somebody else's experience
worse*.
The somebody else might be "you", in which case life is much better. Once you have the concept of flows (at some level of abstraction), you can make more sane choices.
Personally, I've mostly been interested in QOS in the local network: as "hints" that it is worth bidding more aggressively for transmit opportunities in WiFi, for example to ensure that my VOIP, teleconferencing, gaming, music playing and other genuinely real time packets get priority over bulk data (which includes web traffic) and get access to the medium sooner than routine or scavenger applications do.
Whether it should have any use beyond the scope of the network that I control is less than clear to me, for the reasons you state; having my traffic screw up other people's traffic isn't high on my list of "good ideas".
The other danger of QOS is that applications may "game" their use of QOS to get preferential treatment, so each network (and potentially each host) needs to be able to control its own policy and to detect (and potentially punish) transgressors. Right now we don't have those detectors or controls in place, and how to inform naive users that their applications are asking for priority service for no good reason is another unanswered question.
This gaming danger (and the need for a UI that lets policy be set) makes me think it's something we're going to have to work through carefully.
Cerowrt-devel mailing list
[Cerowrt-devel at lists.bufferbloat.net](mailto:Cerowrt-devel at lists.bufferbloat.net)