From: George Amanakis <gamanakis@gmail.com>
To: kevin@darbyshire-bryant.me.uk
Cc: cake@lists.bufferbloat.net, George Amanakis <gamanakis@gmail.com>
Date: Wed, 13 Feb 2019 13:31:32 -0500
Message-Id: <20190213183132.11019-1-gamanakis@gmail.com>
Subject: [Cake] progress? dual-src/dsthost unfairness

I recently rewrote the patch (out-of-tree cake) so that it keeps track of the bulk and sparse flow counts per host. I have been testing it for about a month on a WRT1900ACS and it runs fine. I would love to hear whether Jonathan or anybody else has thought of implementing something different.
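For anyone skimming the patch below, the idea in miniature: each cake_host now counts how many of its flows sit in the bulk and sparse rotations, and that count (instead of the old refcount) becomes the host_load that scales a flow's quantum. What follows is only an illustrative, stand-alone user-space sketch, not part of the patch; the field names echo struct cake_host, but the toy_host type, the host_load() helper and main() are invented for the example.

/* Illustration only: a toy model of the per-host accounting, not kernel code. */
#include <stdint.h>
#include <stdio.h>

struct toy_host {
	uint16_t bulk_flow_count;	/* this host's flows in the bulk rotation */
	uint16_t sparse_flow_count;	/* this host's flows in the sparse rotation */
};

/* CAKE scales the flow quantum by roughly 1/host_load (via the quantum_div[]
 * table), so a host's flows share one host's allocation no matter how many
 * flows that host opens.
 */
static uint16_t host_load(const struct toy_host *h, int bulk)
{
	uint16_t load = bulk ? h->bulk_flow_count : h->sparse_flow_count;

	return load > 1 ? load : 1;
}

int main(void)
{
	struct toy_host h = { 0 };

	h.bulk_flow_count = 4;	/* e.g. four concurrent bulk flows from one host */
	printf("each of this host's flows gets ~1/%u of the flow quantum\n",
	       host_load(&h, 1));
	return 0;
}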
Best,
George
---
 sch_cake.c | 120 +++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 88 insertions(+), 32 deletions(-)

diff --git a/sch_cake.c b/sch_cake.c
index d434ae0..10364ec 100644
--- a/sch_cake.c
+++ b/sch_cake.c
@@ -146,8 +146,10 @@ struct cake_flow {
 struct cake_host {
 	u32 srchost_tag;
 	u32 dsthost_tag;
-	u16 srchost_refcnt;
-	u16 dsthost_refcnt;
+	u16 srchost_bulk_flow_count;
+	u16 srchost_sparse_flow_count;
+	u16 dsthost_bulk_flow_count;
+	u16 dsthost_sparse_flow_count;
 };
 
 struct cake_heap_entry {
@@ -844,8 +846,6 @@ skip_hash:
 		 * queue, accept the collision, update the host tags.
 		 */
 		q->way_collisions++;
-		q->hosts[q->flows[reduced_hash].srchost].srchost_refcnt--;
-		q->hosts[q->flows[reduced_hash].dsthost].dsthost_refcnt--;
 		allocate_src = cake_dsrc(flow_mode);
 		allocate_dst = cake_ddst(flow_mode);
 found:
@@ -865,13 +865,13 @@ found:
 			}
 			for (i = 0; i < CAKE_SET_WAYS;
 				i++, k = (k + 1) % CAKE_SET_WAYS) {
-				if (!q->hosts[outer_hash + k].srchost_refcnt)
+				if (!q->hosts[outer_hash + k].srchost_bulk_flow_count &&
+				    !q->hosts[outer_hash + k].srchost_sparse_flow_count)
 					break;
 			}
 			q->hosts[outer_hash + k].srchost_tag = srchost_hash;
 found_src:
 			srchost_idx = outer_hash + k;
-			q->hosts[srchost_idx].srchost_refcnt++;
 			q->flows[reduced_hash].srchost = srchost_idx;
 		}
 
@@ -887,13 +887,13 @@ found_src:
 			}
 			for (i = 0; i < CAKE_SET_WAYS;
 			     i++, k = (k + 1) % CAKE_SET_WAYS) {
-				if (!q->hosts[outer_hash + k].dsthost_refcnt)
+				if (!q->hosts[outer_hash + k].dsthost_bulk_flow_count &&
+				    !q->hosts[outer_hash + k].dsthost_sparse_flow_count)
 					break;
 			}
 			q->hosts[outer_hash + k].dsthost_tag = dsthost_hash;
 found_dst:
 			dsthost_idx = outer_hash + k;
-			q->hosts[dsthost_idx].dsthost_refcnt++;
 			q->flows[reduced_hash].dsthost = dsthost_idx;
 		}
 	}
@@ -1912,21 +1912,39 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
 		flow->set = CAKE_SET_SPARSE;
 		b->sparse_flow_count++;
 
-		if (cake_dsrc(q->flow_mode))
-			host_load = max(host_load, srchost->srchost_refcnt);
+		if (cake_dsrc(q->flow_mode)) {
+			srchost->srchost_sparse_flow_count++;
+			host_load = max(host_load, srchost->srchost_sparse_flow_count);
+		}
 
-		if (cake_ddst(q->flow_mode))
-			host_load = max(host_load, dsthost->dsthost_refcnt);
+		if (cake_ddst(q->flow_mode)) {
+			dsthost->dsthost_sparse_flow_count++;
+			host_load = max(host_load, dsthost->dsthost_sparse_flow_count);
+		}
 
 		flow->deficit = (b->flow_quantum *
 				 quantum_div[host_load]) >> 16;
 	} else if (flow->set == CAKE_SET_SPARSE_WAIT) {
+		struct cake_host *srchost = &b->hosts[flow->srchost];
+		struct cake_host *dsthost = &b->hosts[flow->dsthost];
+
 		/* this flow was empty, accounted as a sparse flow, but actually
 		 * in the bulk rotation.
 		 */
 		flow->set = CAKE_SET_BULK;
 		b->sparse_flow_count--;
 		b->bulk_flow_count++;
+
+		if (cake_dsrc(q->flow_mode)) {
+			srchost->srchost_sparse_flow_count--;
+			srchost->srchost_bulk_flow_count++;
+		}
+
+		if (cake_ddst(q->flow_mode)) {
+			dsthost->dsthost_sparse_flow_count--;
+			dsthost->dsthost_bulk_flow_count++;
+		}
+
 	}
 
 	if (q->buffer_used > q->buffer_max_used)
@@ -2097,23 +2115,8 @@ retry:
 	dsthost = &b->hosts[flow->dsthost];
 	host_load = 1;
 
-	if (cake_dsrc(q->flow_mode))
-		host_load = max(host_load, srchost->srchost_refcnt);
-
-	if (cake_ddst(q->flow_mode))
-		host_load = max(host_load, dsthost->dsthost_refcnt);
-
-	WARN_ON(host_load > CAKE_QUEUES);
-
 	/* flow isolation (DRR++) */
 	if (flow->deficit <= 0) {
-		/* The shifted prandom_u32() is a way to apply dithering to
-		 * avoid accumulating roundoff errors
-		 */
-		flow->deficit += (b->flow_quantum * quantum_div[host_load] +
-				  (prandom_u32() >> 16)) >> 16;
-		list_move_tail(&flow->flowchain, &b->old_flows);
-
 		/* Keep all flows with deficits out of the sparse and decaying
 		 * rotations. No non-empty flow can go into the decaying
 		 * rotation, so they can't get deficits
@@ -2122,6 +2125,17 @@ retry:
 			if (flow->head) {
 				b->sparse_flow_count--;
 				b->bulk_flow_count++;
+
+				if (cake_dsrc(q->flow_mode)) {
+					srchost->srchost_sparse_flow_count--;
+					srchost->srchost_bulk_flow_count++;
+				}
+
+				if (cake_ddst(q->flow_mode)) {
+					dsthost->dsthost_sparse_flow_count--;
+					dsthost->dsthost_bulk_flow_count++;
+				}
+
 				flow->set = CAKE_SET_BULK;
 			} else {
 				/* we've moved it to the bulk rotation for
@@ -2131,6 +2145,22 @@ retry:
 				 */
 				flow->set = CAKE_SET_SPARSE_WAIT;
 			}
 		}
+
+		if (cake_dsrc(q->flow_mode))
+			host_load = max(host_load, srchost->srchost_bulk_flow_count);
+
+		if (cake_ddst(q->flow_mode))
+			host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
+
+		WARN_ON(host_load > CAKE_QUEUES);
+
+		/* The shifted prandom_u32() is a way to apply dithering to
+		 * avoid accumulating roundoff errors
+		 */
+		flow->deficit += (b->flow_quantum * quantum_div[host_load] +
+				  (prandom_u32() >> 16)) >> 16;
+		list_move_tail(&flow->flowchain, &b->old_flows);
+
 		goto retry;
 	}
@@ -2151,10 +2181,24 @@ retry:
 				       &b->decaying_flows);
 			if (flow->set == CAKE_SET_BULK) {
 				b->bulk_flow_count--;
+
+				if (cake_dsrc(q->flow_mode))
+					srchost->srchost_bulk_flow_count--;
+
+				if (cake_ddst(q->flow_mode))
+					dsthost->dsthost_bulk_flow_count--;
+
 				b->decaying_flow_count++;
 			} else if (flow->set == CAKE_SET_SPARSE ||
 				   flow->set == CAKE_SET_SPARSE_WAIT) {
 				b->sparse_flow_count--;
+
+				if (cake_dsrc(q->flow_mode))
+					srchost->srchost_sparse_flow_count--;
+
+				if (cake_ddst(q->flow_mode))
+					dsthost->dsthost_sparse_flow_count--;
+
 				b->decaying_flow_count++;
 			}
 			flow->set = CAKE_SET_DECAYING;
@@ -2162,16 +2206,28 @@ retry:
 			/* remove empty queue from the flowchain */
 			list_del_init(&flow->flowchain);
 			if (flow->set == CAKE_SET_SPARSE ||
-			    flow->set == CAKE_SET_SPARSE_WAIT)
+			    flow->set == CAKE_SET_SPARSE_WAIT) {
 				b->sparse_flow_count--;
-			else if (flow->set == CAKE_SET_BULK)
+
+				if (cake_dsrc(q->flow_mode))
+					srchost->srchost_sparse_flow_count--;
+
+				if (cake_ddst(q->flow_mode))
+					dsthost->dsthost_sparse_flow_count--;
+
+			} else if (flow->set == CAKE_SET_BULK) {
 				b->bulk_flow_count--;
-			else
+
+				if (cake_dsrc(q->flow_mode))
+					srchost->srchost_bulk_flow_count--;
+
+				if (cake_ddst(q->flow_mode))
+					dsthost->dsthost_bulk_flow_count--;
+
+			} else
 				b->decaying_flow_count--;
 
 			flow->set = CAKE_SET_NONE;
-			srchost->srchost_refcnt--;
-			dsthost->dsthost_refcnt--;
 		}
 		goto begin;
 	}
-- 
2.20.1