[Cerowrt-devel] SQM Question #1: How does SQM shape the ingress?
moeller0 at gmx.de
Sun Dec 29 06:28:04 EST 2013
Hi Dave, hi Rich, hi list,
On Dec 29, 2013, at 08:57 , Dave Taht <dave.taht at gmail.com> wrote:
> This is one of the harder questions for even experts to grasp: that
> ingress shaping is indeed possible, and can be made to work well in
> most circumstances.
> Mentally I view it as several funnels. You have a big funnel at the
> ISP, and create a smaller one on your device. It is still possible for
> the big funnel at the ISP to fill up, but if your funnel is smaller,
> you will fill that one up sooner, and provide signalling back to make
> sure the bigger funnel at the ISP doesn't fill up overmuch, or
> over-often. (It WILL fill up at the ISP at least temporarily fairly
> often - a common buffer size on dslams is like 64k and it doesn't take
> many iw10 flows to force a dropping scenario)
That is a good model to think about it. We assume that only the queue in front of the bottleneck actually grows under load, but unfortunately that is not quite true. Any link with a fast-to-slow transition can accumulate packets in its queue whenever the input (burst) rate exceeds the output bandwidth; the larger the in/out speed delta, the higher the probability of packets accumulating and the longer the queue. Since the DSLAM/CMTS has a larger speed delta than our artificial bottleneck, it will do part of the buffering (and currently not in a very smart way). Under steady-state conditions, however, the DSLAM/CMTS queue should drain and we regain full control over the queue. This actually argues for shaping the ingress more aggressively, doesn't it?
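The fast-to-slow transition argument above can be put into a toy calculation (the numbers are illustrative, not measurements): a burst arrives at the input rate and drains at the output rate, and the queue holds the difference.

```python
def queue_backlog(burst_bytes, in_rate_bps, out_rate_bps):
    """Bytes left queued the moment an input burst of the given size ends."""
    burst_duration = burst_bytes * 8 / in_rate_bps       # seconds the burst lasts
    drained = out_rate_bps * burst_duration / 8          # bytes sent meanwhile
    return max(0.0, burst_bytes - drained)

# A 64 KB burst (roughly what a few IW10 flows can emit back-to-back)
# arriving at 1 Gbit/s toward a 16 Mbit/s DSL link:
print(queue_backlog(64 * 1024, 1e9, 16e6))   # ~64 KB piles up upstream
```

Note that the backlog depends almost entirely on the burst size once the speed delta is large, which is why even a short burst can momentarily fill a 64 KB DSLAM buffer.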
> Note: I kind of missed the conversation about aiming for 85% limits,
> etc. It's in my inbox awaiting time to read it.
TL;DR (too long; didn't read) version: traditional shaping advice only accounted for ATM 48-in-53 framing by assuming 10% lost bandwidth, and added 5% for miscellaneous effects (for large packets the total worst-case ATM quantization and overhead loss is ~16%, so 85% is not the worst advice to give); but since we now can account for the link layer properly, I argue we should aim higher. Proof -> pudding: this "hypothesis" of mine still needs to survive contact with the real world, so I invite everyone to report back the shaping percentage they ended up satisfied with.
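The 85% rule of thumb can be reconstructed with a little arithmetic (my own back-of-the-envelope, not from any standard): ATM carries 48 payload bytes per 53-byte cell, and the last cell of each packet is padded to a full cell.

```python
import math

CELL, PAYLOAD = 53, 48   # ATM cell size and payload per cell, in bytes

def atm_goodput_fraction(packet_bytes, overhead_bytes=0):
    """Fraction of line rate left for the packet after ATM cell framing,
    given per-packet encapsulation overhead (e.g. PPPoE/AAL5 headers)."""
    cells = math.ceil((packet_bytes + overhead_bytes) / PAYLOAD)
    return packet_bytes / (cells * CELL)

# Framing alone costs 48/53, i.e. ~9.4%:
print(f"{48 / 53:.3f}")                        # 0.906
# A 1500-byte packet with ~40 bytes of encapsulation needs 33 cells:
print(f"{atm_goodput_fraction(1500, 40):.3f}") # 0.858, i.e. ~14% loss
```

So ~10% framing plus quantization and encapsulation overhead lands in the ballpark of the traditional 85% recommendation; with proper link-layer accounting, the shaper no longer needs that safety margin.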
> On egress, in cero, you have total control of the queue AND memory for
> buffering and can aim for nearly 100% of the rated bandwidth on the
> shaper. You have some trouble at higher rates with scheduling
> granularity (the htb quantum).
> On ingress, you are battling your ISP's device (memory, workload, your
> line rate, queuing, media acquisition, line errors) for an optimum,
> it's sometimes several ms away (depending on technology), and the
> interactions between these variables are difficult to model, so going
> for 85% as a start seems to be a good idea. I can get cable up to
> about 92% at 24mbit.
Cable is doubly tricky, as access to the cable is time-shared, so we have no real guarantee of link bandwidth at all (though I hope cable ISPs will fix nodes with too much congestion). I think we should collect numbers from cerowrt users and give an empirical recommendation that is at least partly based on real data :)
> I have trouble recommending a formula for any other technology.
> In fact ingress shaping is going to get harder and harder as all the
> newer, "faster" technologies have wildly variable rates, and the
> signalling loop is basically broken between the dumb device (with all
> the knowledge about the connection) and the smarter router (connected
> via 1gig ethernet usually)
It would be so sweet if there were a canonical way to query the current up- and downlink speeds from the CPE...
> My take on it now (as well as then) is we need to fix the cmts's and
> dslams (and wireless, wifi, gpon, etc). DOCSIS 3.1 mandates pie and
> makes other AQM and packet scheduling technologies possible. The
> CMTSes are worse than the devices in many cases, but I think the
> cmts's are on their way to being fixed - but the operators need to be
> configuring them properly. I had had hope the erouters could be fixed
> sooner than docsis 3.1 will ship…
> One interesting note about dslams I learned recently (also from free
> who have been most helpful) is that most DSL based triple play
> services uses a VC (virtual circuit) for TV and thus a simpleminded
> application of htb on ingress won't work in their environment.
In Germany the major telco uses VLAN tagging to separate IPTV packets from the rest; how does that work with HTB? (I have no IPTV service so I cannot test this at all, and it seems that people not using the telco's router have a hard time getting IPTV to work reliably.)
> (I think it might be possible to build a smarter htb that could
> monitor the packets it got into its direct queue and subtract that
> bandwidth from its headline queues, but haven't got back to them on
> that). Also gargoyle's ACC-like approach might work if better
> filtered. (from looking at the code last year I figured it wouldn't
> correctly detect inbound problems on cable in particular, but I think
> I have a filter for that now) I would really like to get away from
> requiring a measurement from the user and am willing to borrow ideas
> from anyone.
The gargoyle approach is to monitor the CMTS queue by sending periodic ping probes and to adjust its ingress shaping to keep the CMTS queue short. This relies on the CMTS being dumb; any ICMP slow-pathing or flow-based queueing will throw a wrench into ACC, as far as I understand it.
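The control loop I am describing can be sketched roughly as follows (a hypothetical sketch of the idea only; the function name, thresholds, and step sizes are my invention, not Gargoyle's actual code):

```python
def adjust_rate(current_rate_kbps, probe_rtt_ms, baseline_rtt_ms,
                slack_ms=10, step=0.05, floor_kbps=1000):
    """Return a new ingress shaper rate after one ping probe of the path.

    If the probe RTT exceeds the idle baseline by more than slack_ms,
    assume the CMTS queue is filling and shape harder; otherwise
    cautiously reclaim bandwidth.
    """
    if probe_rtt_ms > baseline_rtt_ms + slack_ms:
        return max(floor_kbps, current_rate_kbps * (1 - step))
    return current_rate_kbps * (1 + step / 4)

rate = 20000  # kbit/s
rate = adjust_rate(rate, probe_rtt_ms=45, baseline_rtt_ms=20)
print(rate)   # 19000.0 -- probe saw queueing, so the shaper backs off
```

The weak point is the probe itself: if the CMTS answers ICMP on a slow path, or puts the probe flow in its own queue, the measured RTT no longer reflects the queue that bulk traffic experiences.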
> Incidentally I liked gargoyle when I tried it. Those of you that have
> secondary routers here might want to give it a go. (They did sfqred
> briefly then went back to sfq + acc)
> In reading what I just wrote I'm not sure how to make any of this
> clear to mom. "Scheduling granularity"?
"Mom" typically will not use OpenWrt, let alone CeroWrt :) As much as I dislike the unfairness of it, this will only become really useful once commercial home-router manufacturers/programmers include something similar in their products...
> I like what sebastian wrote below, but I think a picture or animation
> would make it clearer.
> On Sat, Dec 28, 2013 at 11:23 PM, Sebastian Moeller <moeller0 at gmx.de> wrote:
>> Rich Brown <richb.hanover at gmail.com> wrote:
>>> As I write the SQM page, I find I have questions that I can’t answer
>>> myself. I’m going to post these questions separately because they’ll
>>> each generate their own threads of conversation.
>>> QUESTION #1: How does SQM shape the ingress?
>>> I know that fq_codel can shape the egress traffic by discarding traffic
>>> for an individual flow that has dwelt in its queue for too long
>>> (greater than the target). Other queue disciplines use other metrics
>>> for shaping the outbound traffic.
>>> But how does CeroWrt shape the inbound traffic? (I have a sense that
>>> the simple.qos and simplest.qos scripts are involved, but I’m not sure
>>> of anything beyond that.)
>> So ingress shaping conceptually works just like egress shaping. The shaper accepts packets at any speed but limits the rate at which it transmits them. So if your natural ingress bandwidth were 100 Mbit/s, you would set the shaper to, say, 95 Mbit/s; the shaper then creates an artificial bottleneck just in front of its own queue, so that it can control the critical queue.
>> Technically, this works by creating an intermediate functional block (IFB) device, redirecting all ingress traffic to this device, and setting up classification and shaping on that device.
>> I hope this helps...
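The IFB redirection described above looks roughly like this (a hypothetical sketch only; the real simple.qos/simplest.qos scripts differ in detail, and eth0/ifb0 and the 95mbit rate are placeholders):

```shell
# Create and bring up the IFB device:
modprobe ifb
ip link set dev ifb0 up

# Redirect everything arriving on eth0 to ifb0:
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# Shape and schedule on ifb0 as if it were egress:
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate 95mbit
tc qdisc add dev ifb0 parent 1:10 fq_codel
```

From the qdisc's point of view, ifb0 is just another egress interface, which is what makes the whole egress toolbox usable for ingress.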
>>> Cerowrt-devel mailing list
>>> Cerowrt-devel at lists.bufferbloat.net
> Dave Täht
> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
More information about the Cerowrt-devel mailing list