From: Dave Taht
To: Dave Täht
Cc: codel@lists.bufferbloat.net
Date: Sun, 6 May 2012 22:51:21 -0700
Subject: Re: [Codel] [PATCH v9] codel: Controlled Delay AQM
In-Reply-To: <1336368957-17586-1-git-send-email-dave.taht@bufferbloat.net>
List-Id: CoDel AQM discussions

On Sun, May 6, 2012 at 10:35 PM, Dave Täht wrote:
> This version (9) adds support for various forms of decrease in s3,
> in the form of the module parameters gentle and decrease_method.

This one (or the one prior) appears to have introduced a wrap-around
error. I see count never get much above 3000, and drop to 1 a lot.
When that happens, delay goes boom. Prior to that it's good...

Over to eric....

>
> It defaults to the algorithm as described in the original presentation.
>
> v1: Original implementation - Dave Taht
> v2: Working code - Corrections for ktime - Dave Taht
> v3: 32 bit support and port to net-next - Eric Dumazet
> v4: 16 bit precision for inv sqrt and cache - Dave Taht
> v5: Kernel cleanup and full precision - Eric Dumazet
> v6: Dump Stats support added - Eric Dumazet
> v7: Complete rewrite for u32 values - Eric Dumazet
> v8: Stats and timing added, 64 bit prescale improved - Eric Dumazet
> v9: debated functionality moved to isolated routine - Dave Taht
> ---
>  include/linux/pkt_sched.h |   25 +++
>  net/sched/Kconfig         |   11 ++
>  net/sched/Makefile        |    1 +
>  net/sched/sch_codel.c     |  463 +++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 500 insertions(+)
>  create mode 100644 net/sched/sch_codel.c
>
> diff --git a/include/linux/pkt_sched.h b/include/linux/pkt_sched.h
> index 0d5b793..f62141e 100644
> --- a/include/linux/pkt_sched.h
> +++ b/include/linux/pkt_sched.h
> @@ -633,4 +633,29 @@ struct tc_qfq_stats {
>         __u32 lmax;
>  };
>
> +/* CODEL */
> +
> +enum {
> +        TCA_CODEL_UNSPEC,
> +        TCA_CODEL_TARGET,
> +        TCA_CODEL_LIMIT,
> +        TCA_CODEL_MINBYTES,
> +        TCA_CODEL_INTERVAL,
> +        __TCA_CODEL_MAX
> +};
> +
> +#define TCA_CODEL_MAX  (__TCA_CODEL_MAX - 1)
> +
> +struct tc_codel_xstats {
> +        __u32           count;
> +        __u32           delay; /* time elapsed since next packet was queued (in us) */
> +        __u32           drop_next;
> +        __u32           drop_overlimit;
> +        __u32           dropping;
> +        __u32           state1;
> +        __u32           state2;
> +        __u32           state3;
> +        __u32           states;
> +};
> +
>  #endif
> diff --git a/net/sched/Kconfig b/net/sched/Kconfig
> index 2590e91..8106c42 100644
> --- a/net/sched/Kconfig
> +++ b/net/sched/Kconfig
> @@ -250,6 +250,17 @@ config NET_SCH_QFQ
>
>           If unsure, say N.
>
> +config NET_SCH_CODEL
> +        tristate "Controlled Delay AQM (CODEL)"
> +        help
> +          Say Y here if you want to use the Controlled Delay (CODEL)
> +          packet scheduling algorithm.
> +
> +          To compile this driver as a module, choose M here: the module
> +          will be called sch_codel.
> +
> +          If unsure, say N.
> +
>  config NET_SCH_INGRESS
>         tristate "Ingress Qdisc"
>         depends on NET_CLS_ACT
> diff --git a/net/sched/Makefile b/net/sched/Makefile
> index dc5889c..41130b5 100644
> --- a/net/sched/Makefile
> +++ b/net/sched/Makefile
> @@ -36,6 +36,7 @@ obj-$(CONFIG_NET_SCH_DRR)     += sch_drr.o
>  obj-$(CONFIG_NET_SCH_MQPRIO)   += sch_mqprio.o
>  obj-$(CONFIG_NET_SCH_CHOKE)    += sch_choke.o
>  obj-$(CONFIG_NET_SCH_QFQ)      += sch_qfq.o
> +obj-$(CONFIG_NET_SCH_CODEL)    += sch_codel.o
>
>  obj-$(CONFIG_NET_CLS_U32)      += cls_u32.o
>  obj-$(CONFIG_NET_CLS_ROUTE4)   += cls_route.o
> diff --git a/net/sched/sch_codel.c b/net/sched/sch_codel.c
> new file mode 100644
> index 0000000..a9e6383
> --- /dev/null
> +++ b/net/sched/sch_codel.c
> @@ -0,0 +1,463 @@
> +/*
> + * net/sched/sch_codel.c        A Codel implementation
> + *
> + *      This program is free software; you can redistribute it and/or
> + *      modify it under the terms of the GNU General Public License
> + *      as published by the Free Software Foundation; either version
> + *      2 of the License, or (at your option) any later version.
> + *
> + * Codel, the COntrolled DELay Queueing discipline
> + * Based on ns2 simulation code presented by Kathie Nichols
> + *
> + * Authors:     Dave Täht
> + *              Eric Dumazet
> + */
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +/*
> + * codel uses a 1024 nsec clock, encoded in u32
> + */
> +typedef u32 codel_time_t;
> +#define CODEL_SHIFT 10
> +
> +static u32 gentle = 0;
> +static u32 decrease_method = 0;
> +module_param(gentle, uint, 0644);
> +module_param(decrease_method, uint, 0644);
> +MODULE_PARM_DESC(gentle, "Gently increment count in massive drop state");
> +MODULE_PARM_DESC(decrease_method, "Various means of decreasing count");
> +
> +
> +static codel_time_t codel_get_time(void)
> +{
> +        u64 ns = ktime_to_ns(ktime_get());
> +
> +        return ns >> CODEL_SHIFT;
> +}
> +
> +#define codel_time_after(a, b)   ((int)(a) - (int)(b) > 0)
> +#define codel_time_after_eq(a, b) ((int)(a) - (int)(b) >= 0)
> +#define codel_time_before(a, b)  ((int)(a) - (int)(b) < 0)
> +#define codel_time_before_eq(a, b) ((int)(a) - (int)(b) <= 0)
> +
> +#define MS2TIME(a) ((a * NSEC_PER_MSEC) >> CODEL_SHIFT)
> +
> +#define DEFAULT_CODEL_LIMIT 1000
> +
> +/* Per-queue state (codel_queue_t instance variables) */
> +
> +struct codel_sched_data {
> +        u32             minbytes;
> +        u32             interval;
> +        codel_time_t    target;
> +
> +        u32             count; /* packets dropped since entering drop state */
> +        u32             drop_count;
> +        bool            dropping;
> +        /* time to declare above q->target (0 if below)*/
> +        codel_time_t    first_above_time;
> +        codel_time_t    drop_next; /* time to drop next packet */
> +
> +        u32             state1;
> +        u32             state2;
> +        u32             state3;
> +        u32             states;
> +        u32             drop_overlimit;
> +};
> +
> +struct codel_skb_cb {
> +        codel_time_t enqueue_time;
> +};
> +
> +
> +/*
> + * return interval/sqrt(x) with good precision
> + */
> +static u32 calc(u32 _interval, u32 _x)
> +{
> +        u64 interval = _interval;
> +        unsigned long x = _x;
> +
> +        /* scale operands for max precision
> +         * On 64bit arches, we can prescale x by 32bits
> +         */
> +        if (BITS_PER_LONG == 64) {
> +                x <<= 32;
> +                interval <<= 16;
> +        }
> +        while (x < (1UL << (BITS_PER_LONG - 2))) {
> +                x <<= 2;
> +                interval <<= 1;
> +        }
> +        do_div(interval, int_sqrt(x));
> +        return (u32)interval;
> +}
> +
> +static struct codel_skb_cb *get_codel_cb(const struct sk_buff *skb)
> +{
> +        qdisc_cb_private_validate(skb, sizeof(struct codel_skb_cb));
> +        return (struct codel_skb_cb *)qdisc_skb_cb(skb)->data;
> +}
> +
> +static codel_time_t get_enqueue_time(const struct sk_buff *skb)
> +{
> +        return get_codel_cb(skb)->enqueue_time;
> +}
> +
> +static void set_enqueue_time(struct sk_buff *skb)
> +{
> +        get_codel_cb(skb)->enqueue_time = codel_get_time();
> +}
> +
> +static codel_time_t control_law(const struct codel_sched_data *q,
> +                                codel_time_t t)
> +{
> +        return t + calc(q->interval, q->count);
> +}
> +
> +static bool should_drop(struct sk_buff *skb, struct Qdisc *sch,
> +                        codel_time_t now)
> +{
> +        struct codel_sched_data *q = qdisc_priv(sch);
> +        codel_time_t sojourn_time;
> +        bool drop;
> +
> +        if (!skb) {
> +                q->first_above_time = 0;
> +                return false;
> +        }
> +        sch->qstats.backlog -= qdisc_pkt_len(skb);
> +        sojourn_time = now - get_enqueue_time(skb);
> +
> +        if (codel_time_before(sojourn_time, q->target) ||
> +            sch->qstats.backlog < q->minbytes) {
> +                /* went below so we'll stay below for at least q->interval */
> +                q->first_above_time = 0;
> +                return false;
> +        }
> +        drop = false;
> +        if (q->first_above_time == 0) {
> +                /* just went above from below. If we stay above
> +                 * for at least q->interval we'll say it's ok to drop
> +                 */
> +                q->first_above_time = now + q->interval;
> +        } else if (codel_time_after(now, q->first_above_time)) {
> +                drop = true;
> +                q->state1++;
> +        }
> +        return drop;
> +}
> +
> +static void codel_drop(struct Qdisc *sch, struct sk_buff *skb)
> +{
> +        struct codel_sched_data *q = qdisc_priv(sch);
> +
> +        qdisc_drop(skb, sch);
> +        q->drop_count++;
> +}
> +
> +/*
> +* if min went above target close to when we last went below,
> +* assume that some drop rate near that controlled the queue on the
> +* last cycle is a good starting point to control it now.
> +*
> +* Since there is debate about it right now, we try a few
> +* different methods.
> +*/
> +
> +static u32 count_rescale(struct codel_sched_data *q, codel_time_t now) {
> +        u32 c = 1;
> +
> +        if (q->count < 2)
> +                return 1;
> +
> +        if (codel_time_after(now - q->drop_next, 16 * q->interval)) {
> +                switch(decrease_method) {
> +                        case 3: /* Taht 1 (not yet what I have in mind) */
> +                                c = q->count - 1;
> +                                break;
> +                        case 2: /* Dumazet 2 */
> +                                c = q->count >> 1;
> +                                break;
> +                        case 1: /* Dumazet 1 */
> +                                c = min(q->count - 1,
> +                                        q->count - (q->count >> 4));
> +                                break;
> +                        case 0: /* Codel Paper Default */
> +                        default:
> +                                c = q->count - 1;
> +                }
> +                c = max(1U, c);
> +        }
> +        return (u32) c;
> +}
> +
> +static struct sk_buff *codel_dequeue(struct Qdisc *sch)
> +{
> +        struct codel_sched_data *q = qdisc_priv(sch);
> +        struct sk_buff *skb = __skb_dequeue(&sch->q);
> +        codel_time_t now;
> +        bool drop;
> +
> +        if (!skb) {
> +                q->dropping = false;
> +                return skb;
> +        }
> +        now = codel_get_time();
> +        drop = should_drop(skb, sch, now);
> +        if (q->dropping) {
> +                if (!drop) {
> +                        /* sojourn time below target - leave dropping state */
> +                        q->dropping = false;
> +                } else if (codel_time_after_eq(now, q->drop_next)) {
> +                        q->state2++;
> +                        /* It's time for the next drop. Drop the current
> +                         * packet and dequeue the next. The dequeue might
> +                         * take us out of dropping state.
> +                         * If not, schedule the next drop.
> +                         * A large backlog might result in drop rates so high
> +                         * that the next drop should happen now,
> +                         * hence the while loop.
> +                         */
> +                        if(gentle)
> +                                q->count++;
> +                        while (q->dropping &&
> +                               codel_time_after_eq(now, q->drop_next)) {
> +                                codel_drop(sch, skb);
> +                                if(!gentle)
> +                                        q->count++;
> +                                skb = __skb_dequeue(&sch->q);
> +                                if (!should_drop(skb, sch, now)) {
> +                                        /* leave dropping state */
> +                                        q->dropping = false;
> +                                } else {
> +                                        /* and schedule the next drop */
> +                                        q->drop_next =
> +                                                control_law(q, q->drop_next);
> +                                }
> +                        }
> +                }
> +        } else if (drop &&
> +                   (codel_time_before(now - q->drop_next,
> +                                      16 * q->interval) ||
> +                    codel_time_after_eq(now - q->first_above_time,
> +                                        2 * q->interval))) {
> +                codel_drop(sch, skb);
> +                skb = __skb_dequeue(&sch->q);
> +                drop = should_drop(skb, sch, now);
> +                q->dropping = true;
> +                q->state3++;
> +                /*
> +                 * if min went above target close to when we last went below,
> +                 * assume that the drop rate that controlled the queue on the
> +                 * last cycle is a good starting point to control it now.
> +                 * Since there is debate about it right now, punt.
> +                 */
> +                q->count = count_rescale(q, now);
> +                q->drop_next = control_law(q, now);
> +        }
> +        q->states++;
> +        /* We cant call qdisc_tree_decrease_qlen() if our qlen is 0,
> +         * or HTB crashes. Defer it for next round.
> +         */
> +        if (q->drop_count && sch->q.qlen) {
> +                qdisc_tree_decrease_qlen(sch, q->drop_count);
> +                q->drop_count = 0;
> +        }
> +        if (skb)
> +                qdisc_bstats_update(sch, skb);
> +        return skb;
> +}
> +
> +static int codel_enqueue(struct sk_buff *skb, struct Qdisc *sch)
> +{
> +        struct codel_sched_data *q;
> +
> +        if (likely(skb_queue_len(&sch->q) < sch->limit)) {
> +                set_enqueue_time(skb);
> +                return qdisc_enqueue_tail(skb, sch);
> +        }
> +        q = qdisc_priv(sch);
> +        q->drop_overlimit++;
> +        return qdisc_drop(skb, sch);
> +}
> +
> +static const struct nla_policy codel_policy[TCA_CODEL_MAX + 1] = {
> +        [TCA_CODEL_TARGET]      = { .type = NLA_U32 },
> +        [TCA_CODEL_LIMIT]       = { .type = NLA_U32 },
> +        [TCA_CODEL_MINBYTES]    = { .type = NLA_U32 },
> +        [TCA_CODEL_INTERVAL]    = { .type = NLA_U32 },
> +};
> +
> +static int codel_change(struct Qdisc *sch, struct nlattr *opt)
> +{
> +        struct codel_sched_data *q = qdisc_priv(sch);
> +        struct nlattr *tb[TCA_CODEL_MAX + 1];
> +        unsigned int qlen;
> +        int err;
> +
> +        if (!opt)
> +                return -EINVAL;
> +
> +        err = nla_parse_nested(tb, TCA_CODEL_MAX, opt, codel_policy);
> +        if (err < 0)
> +                return err;
> +
> +        sch_tree_lock(sch);
> +        if (tb[TCA_CODEL_TARGET]) {
> +                u32 target = nla_get_u32(tb[TCA_CODEL_TARGET]);
> +
> +                q->target = ((u64)target * NSEC_PER_USEC) >> CODEL_SHIFT;
> +        }
> +        if (tb[TCA_CODEL_INTERVAL]) {
> +                u32 interval = nla_get_u32(tb[TCA_CODEL_INTERVAL]);
> +
> +                q->interval = ((u64)interval * NSEC_PER_USEC) >> CODEL_SHIFT;
> +        }
> +        if (tb[TCA_CODEL_LIMIT])
> +                sch->limit = nla_get_u32(tb[TCA_CODEL_LIMIT]);
> +
> +        if (tb[TCA_CODEL_MINBYTES])
> +                q->minbytes = nla_get_u32(tb[TCA_CODEL_MINBYTES]);
> +
> +        qlen = sch->q.qlen;
> +        while (sch->q.qlen > sch->limit) {
> +                struct sk_buff *skb = __skb_dequeue(&sch->q);
> +
> +                sch->qstats.backlog -= qdisc_pkt_len(skb);
> +                qdisc_drop(skb, sch);
> +        }
> +        qdisc_tree_decrease_qlen(sch, qlen - sch->q.qlen);
> +
> +        q->drop_next = q->first_above_time = 0;
> +        q->dropping = false;
> +        sch_tree_unlock(sch);
> +        return 0;
> +}
> +
> +static int codel_init(struct Qdisc *sch, struct nlattr *opt)
> +{
> +        struct codel_sched_data *q = qdisc_priv(sch);
> +
> +        q->target = MS2TIME(5);
> +        /* It should be possible to run with no limit,
> +         * with infinite memory
> +         */
> +        sch->limit = DEFAULT_CODEL_LIMIT;
> +        q->minbytes = psched_mtu(qdisc_dev(sch));
> +        q->interval = MS2TIME(100);
> +        q->drop_next = q->first_above_time = 0;
> +        q->dropping = false; /* exit dropping state */
> +        q->count = 1;
> +        if (opt) {
> +                int err = codel_change(sch, opt);
> +
> +                if (err)
> +                        return err;
> +        }
> +
> +        if (sch->limit >= 1)
> +                sch->flags |= TCQ_F_CAN_BYPASS;
> +        else
> +                sch->flags &= ~TCQ_F_CAN_BYPASS;
> +
> +        return 0;
> +}
> +
> +static u32 codel_time_to_us(codel_time_t val)
> +{
> +        u64 valns = ((u64)val << CODEL_SHIFT);
> +
> +        do_div(valns, NSEC_PER_USEC);
> +        return (u32)valns;
> +}
> +
> +static int codel_dump(struct Qdisc *sch, struct sk_buff *skb)
> +{
> +        struct codel_sched_data *q = qdisc_priv(sch);
> +        struct nlattr *opts;
> +
> +        opts = nla_nest_start(skb, TCA_OPTIONS);
> +        if (opts == NULL)
> +                goto nla_put_failure;
> +        if (nla_put_u32(skb, TCA_CODEL_TARGET, codel_time_to_us(q->target)) ||
> +            nla_put_u32(skb, TCA_CODEL_LIMIT, sch->limit) ||
> +            nla_put_u32(skb, TCA_CODEL_INTERVAL, codel_time_to_us(q->interval)) ||
> +            nla_put_u32(skb, TCA_CODEL_MINBYTES, q->minbytes))
> +                goto nla_put_failure;
> +
> +        return nla_nest_end(skb, opts);
> +
> +nla_put_failure:
> +        nla_nest_cancel(skb, opts);
> +        return -1;
> +}
> +
> +static int codel_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
> +{
> +        struct codel_sched_data *q = qdisc_priv(sch);
> +        struct sk_buff *skb = skb_peek(&sch->q);
> +        codel_time_t now = codel_get_time();
> +        struct tc_codel_xstats st = {
> +                .count  = q->count,
> +                .state1 = q->state1,
> +                .state2 = q->state2,
> +                .state3 = q->state3,
> +                .states = q->states,
> +                .drop_overlimit = q->drop_overlimit,
> +                .delay = skb ? now - get_enqueue_time(skb) : 0,
> +                .drop_next = q->drop_next ? q->drop_next - now : 0,
> +                .dropping = q->dropping,
> +        };
> +
> +        return gnet_stats_copy_app(d, &st, sizeof(st));
> +}
> +
> +static void codel_reset(struct Qdisc *sch)
> +{
> +        struct codel_sched_data *q = qdisc_priv(sch);
> +
> +        qdisc_reset_queue(sch);
> +        sch->q.qlen = 0;
> +        q->dropping = false;
> +        q->count = 1;
> +}
> +
> +static struct Qdisc_ops codel_qdisc_ops __read_mostly = {
> +        .id             =       "codel",
> +        .priv_size      =       sizeof(struct codel_sched_data),
> +
> +        .enqueue        =       codel_enqueue,
> +        .dequeue        =       codel_dequeue,
> +        .peek           =       qdisc_peek_dequeued,
> +        .init           =       codel_init,
> +        .reset          =       codel_reset,
> +        .change         =       codel_change,
> +        .dump           =       codel_dump,
> +        .dump_stats     =       codel_dump_stats,
> +        .owner          =       THIS_MODULE,
> +};
> +
> +static int __init codel_module_init(void)
> +{
> +        return register_qdisc(&codel_qdisc_ops);
> +}
> +static void __exit codel_module_exit(void)
> +{
> +        unregister_qdisc(&codel_qdisc_ops);
> +}
> +module_init(codel_module_init)
> +module_exit(codel_module_exit)
> +
> +MODULE_DESCRIPTION("Controlled Delay queue discipline");
> +MODULE_AUTHOR("Dave Taht");
> +MODULE_AUTHOR("Eric Dumazet");
> +MODULE_LICENSE("GPL");
> --
> 1.7.9.5
>
>
> _______________________________________________
> Codel mailing list
> Codel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/codel
>

-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://www.bufferbloat.net