From: Eric Dumazet
To: Jesper Dangaard Brouer
Cc: Toke Høiland-Jørgensen, Mike Frysinger, Jiri Pirko, netdev@vger.kernel.org,
 Jiri Benc, Patrick McHardy, Steven Barth, bloat@lists.bufferbloat.net,
 David Miller, Jussi Kivilinna, Felix Fietkau, Michal Soltys
Subject: Re: [Bloat] Bad shaping at low rates, after commit 56b765b79 (htb:
 improved accuracy at high rates)
Date: Tue, 04 Jun 2013 08:55:06 -0700
Message-ID: <1370361306.24311.214.camel@edumazet-glaptop>
In-Reply-To: <1370359133.24311.208.camel@edumazet-glaptop>
References: <20130529151330.22c5c89e@redhat.com>
 <20130604141342.00c8eb9f@redhat.com>
 <1370359133.24311.208.camel@edumazet-glaptop>
List-Id: General list for discussing Bufferbloat

On Tue, 2013-06-04 at 08:18 -0700, Eric Dumazet wrote:
> I have a good idea of what's going on for htb at low rates, I am testing
> a fix, thanks for the report !

Yes, we need to convert the whole thing to use ns units, instead of a mix
of 64ns and 1ns units.

Please test the following patch:

diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index 79b1876..6c53341 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -109,7 +109,7 @@ struct htb_class {
 	} un;
 	struct rb_node node[TC_HTB_NUMPRIO];	/* node for self or feed tree */
 	struct rb_node pq_node;	/* node for event queue */
-	psched_time_t pq_key;
+	s64	pq_key;
 
 	int prio_activity;	/* for which prios are we active */
 	enum htb_cmode cmode;	/* current mode of the class */
@@ -124,7 +124,7 @@ struct htb_class {
 	s64	buffer, cbuffer;	/* token bucket depth/rate */
 	psched_tdiff_t	mbuffer;	/* max wait time */
 	s64	tokens, ctokens;	/* current number of tokens */
-	psched_time_t	t_c;	/* checkpoint time */
+	s64	t_c;	/* checkpoint time */
 };
 
 struct htb_sched {
@@ -141,7 +141,7 @@ struct htb_sched {
 	struct rb_root wait_pq[TC_HTB_MAXDEPTH];
 
 	/* time of nearest event per level (row) */
-	psched_time_t near_ev_cache[TC_HTB_MAXDEPTH];
+	s64	near_ev_cache[TC_HTB_MAXDEPTH];
 
 	int defcls;	/* class where unclassified flows go to */
@@ -149,7 +149,7 @@ struct htb_sched {
 	struct tcf_proto *filter_list;
 	int rate2quantum;	/* quant = rate / rate2quantum */
-	psched_time_t now;	/* cached dequeue time */
+	s64	now;	/* cached dequeue time */
 	struct qdisc_watchdog watchdog;
 
 	/* non shaped skbs; let them go directly thru */
@@ -664,8 +664,8 @@ static void htb_charge_class(struct htb_sched *q, struct htb_class *cl,
  * next pending event (0 for no event in pq, q->now for too many events).
  * Note: Applied are events whose have cl->pq_key <= q->now.
  */
-static psched_time_t htb_do_events(struct htb_sched *q, int level,
-				   unsigned long start)
+static s64 htb_do_events(struct htb_sched *q, int level,
+			 unsigned long start)
 {
 	/* don't run for longer than 2 jiffies; 2 is used instead of
 	 * 1 to simplify things when jiffy is going to be incremented
@@ -857,7 +857,7 @@ static struct sk_buff *htb_dequeue(struct Qdisc *sch)
 	struct sk_buff *skb;
 	struct htb_sched *q = qdisc_priv(sch);
 	int level;
-	psched_time_t next_event;
+	s64 next_event;
 	unsigned long start_at;
 
 	/* try to dequeue direct packets as high prio (!) to minimize cpu work */
@@ -880,7 +880,7 @@ ok:
 	for (level = 0; level < TC_HTB_MAXDEPTH; level++) {
 		/* common case optimization - skip event handler quickly */
 		int m;
-		psched_time_t event;
+		s64 event;
 
 		if (q->now >= q->near_ev_cache[level]) {
 			event = htb_do_events(q, level, start_at);
@@ -1200,7 +1200,7 @@ static void htb_parent_to_leaf(struct htb_sched *q, struct htb_class *cl,
 	parent->un.leaf.q = new_q ? new_q : &noop_qdisc;
 	parent->tokens = parent->buffer;
 	parent->ctokens = parent->cbuffer;
-	parent->t_c = psched_get_time();
+	parent->t_c = ktime_to_ns(ktime_get());
 	parent->cmode = HTB_CAN_SEND;
 }
 
@@ -1417,8 +1417,8 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 	/* set class to be in HTB_CAN_SEND state */
 	cl->tokens = PSCHED_TICKS2NS(hopt->buffer);
 	cl->ctokens = PSCHED_TICKS2NS(hopt->cbuffer);
-	cl->mbuffer = 60 * PSCHED_TICKS_PER_SEC;	/* 1min */
-	cl->t_c = psched_get_time();
+	cl->mbuffer = 60ULL * NSEC_PER_SEC;	/* 1min */
+	cl->t_c = ktime_to_ns(ktime_get());
 	cl->cmode = HTB_CAN_SEND;
 
 	/* attach to the hash list and parent's family */