[Codel] [Cerowrt-devel] [PATCH 2/2] codel: reduce count after exiting dropping state after one maxpacket

dpreed at reed.com dpreed at reed.com
Fri Aug 24 11:29:39 EDT 2012


One other thing that worries me is this:  if you have 100 TCP connections all trying to get through a congested link, each "drop" signals only one of those connections that it needs to cut its congestion window (by a factor of 2).   So if each of the 100 connections has a window that corresponds to the bitrate-delay product of the channel, you need on the order of 100 log2(100) ~ 650 drops, at minimum, to "fairly" slow all 100 connections down until each has a window equal to the 1/100 fair share.  The point is that you have to signal all 100 independent end-to-end control loops enough times to slow their rates.
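
To put a number on it, here is a back-of-the-envelope sketch (only an illustration of the arithmetic above, assuming each drop halves exactly one flow's window and every flow starts at the full bitrate-delay product):

    #include <math.h>
    #include <stdio.h>

    /* Rough estimate of the drops needed to bring n flows, each starting
     * at the full bitrate-delay product, down to a 1/n fair share when
     * each drop halves exactly one flow's window: about log2(n) halvings
     * per flow, so roughly n * log2(n) drops in total.
     */
    int main(void)
    {
            int n = 100;
            double drops = n * log2((double)n);

            printf("~%.0f drops to bring %d flows to a fair share\n",
                   drops, n);
            return 0;
    }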
 
Avoiding "drops" means that it takes much longer to get to the right, relatively fair, operating point.   And even if you are near the right operating point already, being selective on drops is pretty "unfair" in a statistical sense.
 
Also, remember that "fair" should not mean trying to give capacity to TCP connections that are already constrained by other bottleneck links.   You can't usefully give them capacity - doing so just creates buffer "inflation" pressure on those links, which causes unnecessary oscillation in the other links.
 
Remember that the Internet is never at a "stable equilibrium" - "waves" flow through it, and the goal is not to "amplify" those waves.   The end-to-end control loops are the damping functions, while the link queue management (with drops and ECN) merely provides signals and keeps latency short.  It can't "damp" traffic, and may amplify the waves if it misguidedly tries to damp them out.
 
-----Original Message-----
From: "Dave Taht" <dave.taht at gmail.com>
Sent: Friday, August 24, 2012 4:08am
To: dpreed at reed.com
Cc: "Dave Täht" <dave.taht at bufferbloat.net>, cerowrt-devel at lists.bufferbloat.net, codel at lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] [PATCH 2/2] codel: reduce count after exiting dropping state after one maxpacket



On Thu, Aug 23, 2012 at 12:17 PM,  <dpreed at reed.com> wrote:
> I would be cautious about cutting down the information rate to TCP.
> Remember the collection of TCP pairs are already seeking this information.
> You risk driving TCP's control loop crazy, especially if you end up
> "resonating" with it with positive feedback.

We have been wrestling with what we call s3 (re-entering the dropping state)
for a very long time. From K&V's current test ns2 code:

https://github.com/dtaht/ns2/blob/master/queue/codel.cc

...

 r = dodeque();
 dropping_ = 1; // mdt: thought was to change this to r.ok
                // and do something mildly different below

 // If min went above target close to when it last went below,
 // assume that the drop rate that controlled the queue on the
 // last cycle is a good starting point to control it now.
 // Unfortunately, this is not easy to get at. "n*interval" is
 // used to indicate "close to" and values from 2 - 16 have been
 // used. A value of 8 has worked well. The count value doesn't
 // decay well enough with this control law. Until a better one
 // is devised, the below is a hack that appears to improve things.

 if (count_ > 2 && now - drop_next_ < 8*interval_) {
     count_ = count_ - 2;
     // kmn decay tests // mdt - floating point is not an option!
     if (count_ > 126) count_ = 0.9844 * (count_ + 2);
 } else
     count_ = 1;
 drop_next_ = control_law(now);
 }
 return (r.p);
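
For reference, a minimal floating-point sketch of that re-entry heuristic, assuming CoDel's usual control law drop_next = now + interval/sqrt(count). The names mirror the fragment but the structure is simplified for illustration - it is not the ns2 code itself:

    #include <math.h>

    struct codel_state {
            double interval;   /* interval parameter, e.g. 100 ms */
            double drop_next;  /* time of the next scheduled drop */
            double count;      /* drops since entering dropping state */
    };

    /* CoDel control law: the next drop comes interval/sqrt(count) later */
    double control_law(const struct codel_state *s, double now)
    {
            return now + s->interval / sqrt(s->count);
    }

    /* On re-entering the dropping state: if we left it only recently
     * ("close to" taken as 8 intervals, as in the fragment), resume from
     * a decayed version of the old count instead of restarting at 1. */
    void reenter_dropping_state(struct codel_state *s, double now)
    {
            if (s->count > 2 && now - s->drop_next < 8 * s->interval) {
                    s->count -= 2;
                    /* extra decay for large counts (the kmn decay test) */
                    if (s->count > 126)
                            s->count = 0.9844 * (s->count + 2);
            } else {
                    s->count = 1;
            }
            s->drop_next = control_law(s, now);
    }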



>
>
>
> Rather than focus on "precision" or "stability" at the contended resource,
> remember the point is to minimize latency on an end-to-end basis so that all
> the TCP's can have tight control loops.  Thus there is no "goal" to making
> the contended queue maximized in capacity.  The goal is that it should drain
> quickly, which means keeping it small!   The goal is (as the codel paper
> says) to use the *minimum* latency as the estimator, not the "average" or
> weighted (delayed) average.
>
>
>
> Keep focused on the control theory principles, NOT on throughput
> maximization.  If you push toward maximizing throughput at any cost, even
> with codel, you will end up with *large* excursions in latency that stick
> for a long time.  I.e. bufferbloat.

I might argue from a control theory perspective that we're not banging
on or observing all the right things.

I was dubious about the value of the 2nd patch, given the more
aggressive decline in drop probability I already had vis-à-vis the ns2
code (which was admittedly far less aggressive than the original
linux code). I might retain the idea, flip it around some more, and
certainly plan to scratch my head a lot... I was glad to figure out
that Newton's method doesn't run in reverse that well, though.
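
For the curious, a floating-point sketch of the numerics: the kernel
tracks 1/sqrt(count) in fixed point inside codel_Newton_step(), using one
Newton iteration per change, y' = y * (3 - count*y*y) / 2. The snippet
below is only an illustration of that iteration, not the kernel code; it
tracks well when count creeps up by one at a time, but a single step
recovers slowly when count is cut back down sharply:

    #include <math.h>
    #include <stdio.h>

    /* One Newton step toward 1/sqrt(count), starting from estimate y */
    static double newton_step(double y, unsigned int count)
    {
            return y * (3.0 - (double)count * y * y) / 2.0;
    }

    int main(void)
    {
            unsigned int count = 400;
            double y = 1.0 / sqrt(count);   /* converged at count = 400 */

            count /= 2;                     /* count cut sharply to 200 */
            printf("one step: %f  true: %f\n",
                   newton_step(y, count), 1.0 / sqrt(count));
            return 0;
    }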

Anyway, in my limited testing today the 3rd version of the patches
worked fine at short RTTs and at a range of bandwidths from 1Mbit to
100Mbit, but that's the easy part and plenty more fiddling remains
with more streams and longer RTTs.

http://snapon.lab.bufferbloat.net/~d/fq_codel/rtts.png

(It's also somewhat weird getting used to nearly perfect
bi-directional utilization all the time in fq_codel - no matter what
underlying version of codel is there.)

Incidentally, there is a 10-second periodicity in my captures that bothers me:

http://snapon.lab.bufferbloat.net/~d/fq_codel/alwaysthoughtthese10secondintervalswereaflawofthecapturetoolbut.png

Anyway



>
> -----Original Message-----
> From: "Dave Täht" <dave.taht at bufferbloat.net>
> Sent: Thursday, August 23, 2012 9:58am
> To: cerowrt-devel at lists.bufferbloat.net
> Subject: [Cerowrt-devel] [PATCH 2/2] codel: reduce count after exiting
> dropping state after one maxpacket
>
> From: Dave Taht <dave.taht at bufferbloat.net>
>
> At a knife's edge, where we are rapidly entering and exiting
> a dropping state, seek lower to find the optimum.
> ---
> include/net/codel.h | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/include/net/codel.h b/include/net/codel.h
> index dbfccb7..5e85632 100644
> --- a/include/net/codel.h
> +++ b/include/net/codel.h
> @@ -342,6 +342,9 @@ static struct sk_buff *codel_dequeue(struct Qdisc *sch,
> vars->drop_next = codel_control_law(vars->drop_next,
> params->interval,
> vars->rec_inv_sqrt);
> + } else { /* we dropped out of the dropping state in 1 pkt */
> + vars->count = vars->count > 1 ? vars->count - 1 : 1;
> + codel_Newton_step(vars);
> }
> }
> end:
> --
> 1.7.9.5
>
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>



-- 
Dave Täht
http://www.bufferbloat.net/projects/cerowrt/wiki - "3.3.8-17 is out
with fq_codel!"

