From: Dave Taht
To: Andrew McGregor
Cc: Richard Edmands, codel
Date: Tue, 20 May 2014 07:34:18 -0700
Subject: Re: [Codel] Floating an Idea. ip_fq_codel

What I'd done for torrent was to rely on either an l7 classifier or the
user to mark packets as CS1 (background), and to have a 3-level shaper
like yours that distinguished between diffserv classes. Obviously you
can't rely on users marking their traffic appropriately... and yes,
per-dest fairness is often a good idea. I've asked brahm to make CS1
the default rather than optional. (A rough sketch of what I mean by
marking is below.)

Secondly, at higher rates I generally found that torrent and web
traffic co-existed pretty well, with stuff in slow start mode (web)
blasting stuff in congestion avoidance out of the way. However, ledbat
behaves like reno when delays on the link are low, and so ends up with
the same priority as other flows. This is universal - aqm (red) or fq
(sfq) do this to torrent, also.

http://perso.telecom-paristech.fr/~drossi/paper/rossi13tma-b.pdf

(A mitigating fact is that latencies stay low for everybody - even
with 25 torrent flows going full blast at 20mbit I hardly notice.)

Some comments on your script.

0) Yes, getting the pppoe and atm framing calculation correct for htb
is a PITA. (See the tons of traffic on it on the cerowrt-devel list.)

1) A problem in your script is that it only applies to ipv4 rather
than ipv6 traffic. You should at least have a filter that sends ipv6
to a good bucket (htb default 13 is sending ipv6 to the last bucket).
A sketch of such a filter is below.

2) A feature of the native fq_codel hash is that it is unique per
instance, because a random number is mixed into the derived hash. I'd
like to see syntax added to tc to get the same random perturbation
into custom hashes (and also hashing on mac address), e.g.
dst-mac,random.

2a) An error, if you were using this filter, is that the divisor has
to be equal to or less than the number of flows defined in the qdisc
(a corrected version is sketched below):

#$tc filter add dev $ext prio 1 protocol ip parent 1: handle 100 flow hash keys nfct-src,dst divisor 2048 baseclass 1:11

3) Running at slow rates is a PITA, as is framing compensation.

$tc class add dev $ext parent 1:1 classid 1:11 htb rate 300kbit ceil $ext_up overhead 40 mtu 1492 mpu 53 linklayer atm

4) Running any qdisc at a very low rate is problematic. Here, I don't
think this is very correct:

$tc qdisc add dev $ext parent 1:11 handle 11: fq_codel noecn target 25ms interval 75ms quantum 512 flows 512 limit 300

limit 300 # Unless you are running with very low memory or at very low
rates it's best to leave this higher. You ARE running at a very low
rate.

quantum 512 # Generally I've settled on quantum 300 as a good value
for low rates.

flows 512 # To maximize aqm behavior, flows 16 works pretty well; to
minimize the birthday problem, 1024 works pretty well. In either case
it needs to match the divisor of the custom filter you are using.

target 25ms, interval 75ms # The overall recommendation in the draft
is that target be set to 5-10% of interval, and that interval
correspond to the typical path length. If you are on dsl, that initial
path length can be quite large.... In practice, with htb, the path
length seems to get much larger at low rates (htb & codel each buffer
up a packet), so I THINK (but this would need some testing) that a
more correct value for 300kbit is interval 150ms, and target should
account for an MTU's worth of packets, or 40ms or so.... BUT, as we
are now combining multiple qdiscs in parallel in this script, it's
feasible your target is correct *compared to your overall 900kbit
bandwidth*. A retuned version along those lines is sketched below.

We definitely need to do more work at really low speeds.

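To make the CS1 idea above concrete, here is a rough sketch of the
sort of marking + steering I'm talking about. The port number and the
background class are just placeholders (not taken from your script),
so adjust to taste:

# Hypothetical: mark outbound torrent traffic (guessed here as
# source port 51413) as CS1 in the mangle table...
iptables -t mangle -A POSTROUTING -o $ext -p tcp --sport 51413 -j DSCP --set-dscp-class CS1
iptables -t mangle -A POSTROUTING -o $ext -p udp --sport 51413 -j DSCP --set-dscp-class CS1
# ...and steer anything marked CS1 (tos byte 0x20, ECN bits masked off)
# into an assumed background class 1:13.
$tc filter add dev $ext parent 1: prio 3 protocol ip u32 match ip tos 0x20 0xfc flowid 1:13
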
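For 1), a minimal sketch of an ipv6 catch-all. The target class 1:11
is an assumption - point it at whichever bucket you want ipv6 to land
in rather than the htb default:

# Send all ipv6 traffic to class 1:11 instead of the default bucket.
$tc filter add dev $ext parent 1: prio 2 protocol ipv6 u32 match u32 0 0 flowid 1:11
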
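For 2a), the same filter with the divisor brought down so it no longer
exceeds the flows count (here assuming you also raise flows to 1024 as
discussed under 4; the only hard requirement is divisor <= flows):

$tc filter add dev $ext prio 1 protocol ip parent 1: handle 100 flow hash keys nfct-src,dst divisor 1024 baseclass 1:11
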
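And for 4), this is roughly what I'd try at 300kbit, with the caveat
that the 40ms/150ms figures are my guess above and would need testing;
flows 1024 here matches the divisor in the filter sketch:

$tc qdisc add dev $ext parent 1:11 handle 11: fq_codel noecn target 40ms interval 150ms quantum 300 flows 1024 limit 300
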
On Tue, May 20, 2014 at 3:16 AM, Andrew McGregor wrote:
> That's about what constitutes a flow. fq_codel as implemented in linux
> works per (source ip, dest ip, protocol, source port, dest port) 5-tuple.
> Linux should probably support multiple flow hashing algorithms in the
> kernel.
>
>
> On Tue, May 20, 2014 at 7:15 PM, Richard Edmands wrote:
>>
>> In my environment we've got a fair chunk of torrent usage happening
>> (+ gaming) and with fq_codel giving the advantage to whichever
>> individual could open up as many connections as possible, the entire
>> situation imploded very quickly.
>> So to balance this out I used htb to implement the IP part of this
>> (actually not really, I made groups of ip's which belonged to
>> individuals) and stuck fq_codel on top of the divided setup.
>> With this system what now happens is each IP now gets equal
>> utilization of the link (actually, I'm a lazy hack. I only
>> implemented the uplink section) which prevents the advantage of
>> opening up as many connections as possible.
>> Now when an individual decides to go nuts, they're limited to what is
>> available to them without harming everyone else, without compromising
>> maximum possible speed.
>>
>> I have had this running in my environment for the past month and WOW.
>>
>> See pastebin'd implementation.
>>
>> http://pastebin.com/hXtzFL9f
>>

--
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article