From: Toke Høiland-Jørgensen
To: Jonathan Morton, Dan Siemon
Cc: Dave Taht, Cake List
Subject: Re: [Cake] Using cake to shape 1000’s of users.
Date: Sat, 28 Jul 2018 10:56:03 +0200
Message-ID: <87wotfzql8.fsf@toke.dk>

Jonathan Morton writes:

> Yes, eBPF does seem to be a good fit for that.
>
> So in summary, the logical flow of a packet should be:
>
> 1: Map dst or src IP to subscriber (eBPF).
> 2: Map subscriber to speed/overhead tier (eBPF).
> 3: (optional) Classify Diffserv (???).
> 4: Enqueue per flow, handle global queue overflow (rare!)
> by dropping from head of longest queue (like Cake).

Note that with the existing tc classifier stuff we already added to Cake,
we basically have this already (eBPF can map traffic to tin and flow
however it pleases).

> --- enqueue/dequeue split ---
> 5: Wait for shaper on earliest-scheduled subscriber link with waiting
> traffic (borrow sch_fq's rbtree?).
> 6: Wait for shaper on aggregate backhaul link (congestion can happen
> here too).
> 7: Choose one of subscriber's queues, apply AQM and deliver a packet
> (mostly like Cake does).
>
> If that seems reasonable, we can treat it as a baseline spec for
> others' input.

I think the minimum modification to the existing CAKE code base would be
to just support an arbitrary (configurable) number of tins that (eBPF) tc
filters can map traffic into. You'd need a better way of selecting the
next tin to service; I think an rbtree is a reasonable choice here. You'd
probably also want some way of avoiding allocating CAKE_FLOWS queues per
tin if there are a lot of tins. The fq structure used for WiFi (which was
inspired by CAKE in the first place) solves this by allocating one big
batch of queues and mapping them to tins as packets are assigned to them;
see fq_impl.h.

The above is obviously aimed at the minimum required to support an "ISP
shaper" use case in the existing CAKE qdisc. If you're designing a whole
new qdisc, there are obviously other ways of structuring things; so see
the above more as inspiration... :)

-Toke