From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 21 Oct 2013 18:43:10 -0700
From: Dave Taht
To: Eric Dumazet
Cc: netdev, codel@lists.bufferbloat.net
Subject: Re: [Codel] [PATCH 1/2] fq_codel: keep dropped statistic around
Message-ID: <20131022014310.GA5076@lists.bufferbloat.net>
References: <1382404797-17239-1-git-send-email-dave.taht@bufferbloat.net>
 <1382404797-17239-2-git-send-email-dave.taht@bufferbloat.net>
User-Agent: Mutt/1.5.20 (2009-06-14)
List-Id: CoDel AQM discussions

On Mon, Oct 21, 2013 at 06:27:11PM -0700, Eric Dumazet wrote:
> On Oct 21, 2013 6:20 PM, "Dave Taht" wrote:
> >
> > Having more accurate dropped information in this qdisc is useful.
> >
> > Signed-off-by: Dave Taht
> > ---
> >  net/sched/sch_fq_codel.c | 1 -
> >  1 file changed, 1 deletion(-)
> >
> > diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
> > index 5578628..437bc95 100644
> > --- a/net/sched/sch_fq_codel.c
> > +++ b/net/sched/sch_fq_codel.c
> > @@ -193,7 +193,6 @@ static int fq_codel_enqueue(struct sk_buff *skb, struct Qdisc *sch)
> >  		list_add_tail(&flow->flowchain, &q->new_flows);
> >  		q->new_flow_count++;
> >  		flow->deficit = q->quantum;
> > -		flow->dropped = 0;
> >  	}
> >  	if (++sch->q.qlen <= sch->limit)
> >  		return NET_XMIT_SUCCESS;
> > --
> > 1.7.9.5
>
> I am travelling to Edinburgh, so will be short.

Wish I could have made it.

> Since fqcodel recycles a slot, we need to clear this counter.
I prefer to think of it as a per-slot "dropped" counter, and to have it
retain the total number of drops in that slot since qdisc initialization.

> We do not know
> if slot is reused by previous flow or a new flow hashing to same bucket.

There could also, in this case, be several flows hashing to the same
bucket and dropping packets.

With the current zeroing of "dropped", polling "tc -s class" yields an
unrevealing drop statistic of "0" for many workloads with multiple
streams at lower bandwidths. With it not getting zeroed, as per this
patch, clear patterns show up over many seconds as queues empty, get
filled by bursts, and take drops.

This patch has been in OpenWrt and CeroWrt since February.