From: Sebastian Moeller
To: Dave Taht
Cc: cerowrt-devel
Date: Sun, 29 Dec 2013 12:28:04 +0100
Subject: Re: [Cerowrt-devel] SQM Question #1: How does SQM shape the ingress?
Message-Id: <8B401165-11EF-4C80-8972-FC09A3C9C477@gmx.de>
List-Id: Development issues regarding the cerowrt test router project

Hi Dave, hi Rich, hi list,

On Dec 29, 2013, at 08:57, Dave Taht wrote:

> This is one of the harder questions for even experts to grasp: that
> ingress shaping is indeed possible, and can be made to work well in
> most circumstances.
>
> Mentally I view it as several funnels. You have a big funnel at the
> ISP, and create a smaller one on your device.
It is still possible for
> the big funnel at the ISP to fill up, but if your funnel is smaller,
> you will fill that one up sooner, and provide signalling back to make
> sure the bigger funnel at the ISP doesn't fill up overmuch, or
> over-often. (It WILL fill up at the ISP at least temporarily fairly
> often - a common buffer size on DSLAMs is like 64k and it doesn't take
> many iw10 flows to force a dropping scenario)

That is a good model to think about it. We tend to assume that only the queue in front of the bottleneck grows under load, but unfortunately that is not quite true. Any fast-to-slow transition can accumulate packets in its queue whenever the input (burst) rate exceeds the output bandwidth: the larger the in/out speed delta, the higher the probability of packets accumulating, and the longer the queue. Since the DSLAM/CMTS sees a larger speed delta than our artificial bottleneck, it will do part of the buffering (and currently not in a very smart way). Under steady-state conditions, however, the DSLAM/CMTS queue should drain and we get full control over the queue back. This actually argues for shaping the ingress more aggressively, doesn't it?

>
> Note: I kind of missed the conversation about aiming for 85% limits,
> etc. It's in my inbox awaiting time to read it.

TL;DR (too long; didn't read): traditional shaping advice only accounted for the ATM 48-in-53 framing by assuming 10% lost bandwidth, and added 5% for miscellaneous effects (for large packets the total worst-case ATM quantization and overhead loss is ~16%, so 85% is not the worst advice to give); but since we can now account for the link layer properly, I argue we should aim higher. Proof -> pudding: this "hypothesis" of mine needs to actually survive contact with the real world, so I invite everyone to report back the satisfactory shaping % they ended up with.
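To put the fast-to-slow argument in numbers: a burst of B bytes arriving at rate R_in onto a link draining at R_out leaves a peak standing queue of B * (1 - R_out/R_in). A quick sketch (the burst size and rates are made-up examples, not measurements):

```shell
# Peak queue left behind by a burst that arrives faster than the
# bottleneck drains: backlog = burst * (in_rate - out_rate) / in_rate
queue_peak_bytes() { # burst_bytes in_kbit out_kbit
  echo $(( $1 * ($2 - $3) / $2 ))
}

# A 1 MB burst hitting a 100 Mbit -> 10 Mbit transition parks ~900 KB
# in the queue - far more than the ~64 KB a typical DSLAM can hold.
queue_peak_bytes 1000000 100000 10000
```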
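The ATM accounting behind those percentages can be sanity-checked with a small helper (the default 40-byte per-packet overhead is an assumption for illustration; real PPPoA/PPPoE/AAL5 overheads differ between ISPs):

```shell
# Bytes on the wire for one IP packet over ATM: the payload plus
# per-packet overhead is padded up to whole 48-byte cell payloads,
# and every cell costs 53 bytes on the line.
atm_wire_bytes() { # ip_packet_bytes [overhead_bytes]
  local payload=$1 overhead=${2:-40}
  local cells=$(( (payload + overhead + 47) / 48 ))  # ceiling division
  echo $(( cells * 53 ))
}

atm_wire_bytes 1500   # full-size packet: 33 cells = 1749 bytes on the wire
# so a 1500-byte packet loses (1749 - 1500) / 1749 = ~14% to framing
# and quantization, in line with the ~16% worst case mentioned above
```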
>
> On egress, in cero, you have total control of the queue AND memory for
> buffering and can aim for nearly 100% of the rated bandwidth on the
> shaper. You have some trouble at higher rates with scheduling
> granularity (the htb quantum).
>
> On ingress, you are battling your ISP's device (memory, workload, your
> line rate, queuing, media acquisition, line errors) for an optimum,
> it's sometimes several ms away (depending on technology), and the
> interactions between these variables are difficult to model, so going
> for 85% as a start seems to be a good idea. I can get cable up to
> about 92% at 24mbit.

Cable is doubly tricky, as access to the cable is time-shared, so we have no real guarantee of link bandwidth at all (now I hope cable ISPs will fix nodes with too much congestion). I think we should collect numbers from the cerowrt users and empirically give a recommendation that is at least partly based on real data :)

>
> I have trouble recommending a formula for any other technology.
>
> In fact ingress shaping is going to get harder and harder as all the
> newer, "faster" technologies have wildly variable rates, and the
> signalling loop is basically broken between the dumb device (with all
> the knowledge about the connection) and the smarter router (connected
> via 1gig ethernet usually)

It would be so sweet if there were a canonical way to query the current up- and downlink speeds from the CPE...

>
> My take on it now (as well as then) is we need to fix the cmts's and
> dslams (and wireless, wifi, gpon, etc). DOCSIS 3.1 mandates pie and
> makes other AQM and packet scheduling technologies possible. The
> CMTSes are worse than the devices in many cases, but I think the
> cmts's are on their way to being fixed - but the operators need to be
> configuring them properly.
I had hoped the eRouters could be fixed
> sooner than DOCSIS 3.1 will ship…
>
> One interesting note about dslams I learned recently (also from Free,
> who have been most helpful) is that most DSL-based triple-play
> services use a VC (virtual circuit) for TV, and thus a simpleminded
> application of htb on ingress won't work in their environment.

In Germany the major telco uses VLAN tagging to separate IPTV packets from the rest; how does that work with HTB? (I have no IPTV service so I cannot test at all, and it seems that people not using the telco's router have a hard time getting IPTV to work reliably.)

>
> (I think it might be possible to build a smarter htb that could
> monitor the packets it got into its direct queue and subtract that
> bandwidth from its headline queues, but haven't got back to them on
> that). Also gargoyle's ACC-like approach might work if better
> filtered. (from looking at the code last year I figured it wouldn't
> correctly detect inbound problems on cable in particular, but I think
> I have a filter for that now) I would really like to get away from
> requiring a measurement from the user and am willing to borrow ideas
> from anyone.
>
> http://gargoylerouter.com/phpbb/viewtopic.php?f=5&t=2793&start=10

The gargoyle approach is to monitor the CMTS queue by sending periodic ping probes and adjusting its ingress shaping to keep the CMTS queue short. This relies on the CMTS being dumb; any ICMP slow-pathing or flow-based queueing will throw a wrench into ACC, as far as I understand it.
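The ping-probe idea can be sketched as a trivial control loop (purely illustrative; the function name, thresholds, and step sizes are invented here and are not gargoyle's actual algorithm):

```shell
# One step of an ACC-style ingress controller: if probe RTT rises well
# above the unloaded baseline, the CMTS queue is filling, so shape
# harder; if RTT stays near baseline, cautiously give bandwidth back.
# Rates in kbit/s, RTTs in whole ms (integer math only).
adjust_ingress_rate() { # rate_kbit ping_ms baseline_ms floor_kbit ceiling_kbit
  local rate=$1 ping=$2 base=$3 floor=$4 ceiling=$5
  if [ "$ping" -gt $(( base + 25 )) ]; then
    rate=$(( rate * 95 / 100 ))    # queue building: back off 5%
  elif [ "$ping" -lt $(( base + 5 )) ]; then
    rate=$(( rate * 102 / 100 ))   # queue looks empty: probe upward 2%
  fi
  [ "$rate" -lt "$floor" ] && rate=$floor
  [ "$rate" -gt "$ceiling" ] && rate=$ceiling
  echo "$rate"
}

adjust_ingress_rate 20000 60 20 5000 24000   # RTT spiked: 20000 -> 19000
```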
"Mom" typically will not use openWRT let alone ceroWRT :) as = much as I dispose the unfairness of it, this will only become really = useful once commercial home-router manufacturers/programmers will = include something similar in their products... >=20 > I like what sebastian wrote below, but I think a picture or animation > would make it clearer. >=20 > On Sat, Dec 28, 2013 at 11:23 PM, Sebastian Moeller = wrote: >> Rich Brown wrote: >>> As I write the SQM page, I find I have questions that I can=92t = answer >>> myself. I=92m going to post these questions separately because = they=92ll >>> each generate their own threads of conversation. >>>=20 >>> QUESTION #1: How does SQM shape the ingress? >>>=20 >>> I know that fq_codel can shape the egress traffic by discarding = traffic >>> for an individual flow that has dwelt in its queue for too long >>> (greater than the target). Other queue disciplines use other metrics >>> for shaping the outbound traffic. >>>=20 >>> But how does CeroWrt shape the inbound traffic? (I have a sense that >>> the simple.qos and simplest.qos scripts are involved, but I=92m not = sure >>> of anything beyond that.) >>=20 >> So ingress shaping conceptually works just as egress shaping. = The shaper accepts packets at any speed from both directions but limits = the speed used for transmitting them. So if your ingress natural = bandwidth would be 100Mbit/s you would set the shaper to say 95Mbit/s, = so the shaper will create an internal artificial bottleneck just in = front of its queue, so that it can control the critical queue. >> Technically, this works by creating an artificial intermediate = functional block device? (IFB), moving all ingress traffic to this = device and setting up classification and shaping on that device. >>=20 >> I hope this helps... 
>>
>> Sebastian
>>>
>>>
>>> _______________________________________________
>>> Cerowrt-devel mailing list
>>> Cerowrt-devel@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>
>> Hi Rich,
>>
>> --
>> Sent from my Android phone with K-9 Mail. Please excuse my brevity.
>
> --
> Dave Täht
>
> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html