From: Sebastian Moeller
Date: Fri, 12 Jun 2015 18:02:30 +0200
To: Daniel Havey
Cc: cake@lists.bufferbloat.net, Greg White, "cerowrt-devel@lists.bufferbloat.net", bloat
Subject: Re: [Cerowrt-devel] [Cake] active sensing queue management
Message-Id: <3C1D873F-EFDD-4B62-8E56-C1BDFE61890E@gmx.de>
References: <6594DC36-738D-4E1D-857E-701B2B11388F@gmx.de>

Hi Daniel,

On Jun 12, 2015, at 17:02 , Daniel Havey wrote:

> On Thu, Jun 11, 2015 at 12:27 AM, Sebastian Moeller wrote:
>> [...]
>> Except that DOCSIS 3.1 PIE in the modem does not work that way. As I understand http://www.cablelabs.com/wp-content/uploads/2014/06/DOCSIS-AQM_May2014.pdf section 3.2 MAP PROCESSING REQUIREMENTS, cable-modem PIE will not stuff data into the lower layers until it has received a grant to actually send that data, hence no uncontrolled sub-IP-layer bufferbloat is possible (unless there are severe RF issues that take out data during transmission). So at least for the upstream direction DOCSIS 3.1 PIE will not suffer from buffer displacement, if I understand the CableLabs white paper correctly.
>
> Hmmm, interesting.  Are you sure?  I'm a CS not an EE, so the PHY layer
> is like black magic to me.  However, I still think (although I am
> willing to be convinced otherwise by someone with superior knowledge
> :)) that the IP layer puts packets into a MAC layer queue.  Then the
> MAC layer makes a queue-depth-based request for bandwidth in order to
> serialize and send the data.
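For what it's worth, here is how I picture that request/grant loop, as a toy sketch only (plain Python, invented names and units, emphatically not real DOCSIS code):

    # Toy model of the DOCSIS upstream request/grant idea as I read the
    # CableLabs paper: packets wait in the IP-layer queue (where PIE can
    # drop/mark) and are handed to the MAC/PHY only once the CMTS has
    # granted the requested bytes, so sub-IP buffering stays bounded.
    from collections import deque

    class ToyModem:
        def __init__(self):
            self.ip_queue = deque()    # packet lengths; PIE acts here
            self.granted_bytes = 0     # outstanding grant from the CMTS

        def enqueue(self, pkt_len):
            self.ip_queue.append(pkt_len)

        def bandwidth_request(self):
            # queue-depth-based request, as you describe
            return sum(self.ip_queue)

        def receive_grant(self, nbytes):
            self.granted_bytes += nbytes

        def transmit(self):
            # dequeue into the MAC only what the current grant covers
            sent = []
            while self.ip_queue and self.ip_queue[0] <= self.granted_bytes:
                pkt_len = self.ip_queue.popleft()
                self.granted_bytes -= pkt_len
                sent.append(pkt_len)
            return sent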
I am not sure, but maybe Greg White (CCd) can help us out here? @Greg, is it right that the DOCSIS 3.1 PIE implementation will keep a close lid on how many packets/bytes are queued in the lower layers of the stack?

> If somebody really knows how this works, please help! :^)  Is the
> upload of a DOCSIS 3.1 modem really unbloatable below the IP layer?  I
> just want to know for my own edification :)
>
>> Your solution still might be a valuable add-on to control the downstream bufferbloat in addition.
> I agree!  If that reading of the CableLabs paper is correct then this
> nicely solves the upload vs. download problem and we don't really need
> BQL either.  If it is not true, then we use BQL on the egress to solve
> the upload bloat problem and ASQM to solve the download bloat problem.
> Perfect solution!  I love it when a plan comes together! :^)

AFAIK, BQL is so far only implemented in ethernet drivers, so if your uplink is equal to or only slightly higher than 10, 100, 1000, or 10000 Mbps, BQL with fq_codel will not need a shaper on egress and should still hold the buffers at bay. Unfortunately the actual egress rates are often quite far off these ethernet speed tiers. I believe Dave Taht is trying to convince the ethernet driver folks to allow setting their egress at non-traditional rates; I could be wrong though...

>
>> I also believe that Free in France had a modified DSL driver for their box that made sure sub-IP buffering was bound to a low number of packets as well, so no displaced buffers there either. Now it seems that this solution for DSL was unique so far and has not caught on, but once DOCSIS 3.1 modems hit the market upstream PIE in the modems will be reality.
>
> Free in France?  Dave, isn't that your provider?  I thought they were
> running fq_CoDel?
> In any case, just because PIE in the modems is a reality don't be
> tempted to declare the problem solved and go home.  Never
> underestimate the ability of the ISPs to do the wrong thing for very
> good reasons :^)  What happens if they don't turn it on?  This is
> really what I was trying to solve with ASQM.  What if your provider
> won't run CoDel or PIE for whatever incomprehensible reason?  Then you
> run ASQM and be done with it.

I like your active sensing approach, I just believe that the scenario you set out in the paper is not fully true, so I tried to voice my concerns. Personally I am on a VDSL link, so DOCSIS PIE or no DOCSIS PIE, my link is still bloated and I am looking for new solutions. I like your magic packet idea, even though my taste in these matters is debatable ;) but I fear that to work this needs to run on the modem, and there are only a few fully open-source modems around (if any) on which to implement your active probe. Plus, on a DSL link the congestion typically comes between DSLAM and BRAS (as the DSL link is not shared, unlike the cable situation) and I fear the DSLAM might already return the probe packet…
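Out of curiosity I have been toying with a short-TTL probe myself, in the spirit of the gargoyle approach quoted below; a rough sketch, assuming Linux (the raw ICMP socket needs root), with destination and TTL as mere placeholders:

    # Rough sketch of a gargoyle-style short-TTL UDP probe: send a UDP
    # packet with a small TTL and time the ICMP time-exceeded reply from
    # a hop just past the modem. Destination (a TEST-NET address) and
    # TTL are placeholders; needs root for the raw ICMP socket.
    import socket
    import time

    def probe_rtt(dst="192.0.2.1", ttl=2, timeout=1.0):
        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        udp.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        icmp = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.getprotobyname("icmp"))
        icmp.settimeout(timeout)
        try:
            t0 = time.monotonic()
            udp.sendto(b"probe", (dst, 33434))  # traceroute-style port
            icmp.recvfrom(1500)                 # time-exceeded comes back
            return time.monotonic() - t0
        except socket.timeout:
            return None
        finally:
            udp.close()
            icmp.close()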
Best Regards
	Sebastian

>
>>
>>>
>>> On Wed, Jun 10, 2015 at 1:54 PM, Sebastian Moeller wrote:
>>>> Hi Dave,
>>>>
>>>> On Jun 10, 2015, at 21:53 , Dave Taht wrote:
>>>>
>>>>> http://dl.ifip.org/db/conf/networking/networking2015/1570064417.pdf
>>>>>
>>>>> gargoyle's qos system follows a similar approach, using htb + sfq, and
>>>>> a short ttl udp flow.
>>>>>
>>>>> Doing this sort of measured, then floating the rate control with
>>>>> "cake" would be fairly easy (although it tends to be a bit more
>>>>> compute intensive, not being on a fast path)
>>>>>
>>>>> What is sort of missing here is trying to figure out which side of the
>>>>> bottleneck is the bottleneck (up or down).
>>>
>>> Yeah, we never did figure out how to separate the up from the
>>> downlink.  However, we just consider the access link as a whole (up +
>>> down) and mark/drop according to ratios of queuing time.
>>
>> This is a bit sad; why reduce, say, the effective uplink bandwidth if only the downstream is contended? Not that I have a good alternative solution that would not require help from outside boxes.
>>
>>> Overall it
>>> seems to work well, but, we never did a mathematical analysis.  Kind
>>> of like saying it's not a "bug", it is a feature.  And in this case it
>>> is true since both sides can experience bloat.
>>
>> Yes, but you only want to throttle traffic on the congested leg of the link; otherwise bandwidth efficiency goes to hell if you look at bi-directional link-saturating traffic.
>>
>>>> Yeah, they rely on having a reliable packet reflector upstream of the “bottleneck” so they get their timestamped probe packets returned. In the paper they used either uplink or downlink traffic, so figuring out where the bottleneck was was easy; at least this is how I interpret “Experiments were performed in the upload (data flowing from the users to the CDNs) as well as in the download direction.” At least this is what I get from their short description in glossing over the paper.
>>>> Nice paper, but really not a full solution either, unless the ISPs cooperate in supplying stable reflectors powerful enough to support all downstream customers. But if the ISPs cooperated, I would guess, they could eradicate downstream bufferbloat to begin with. Or the ISPs could have the reflector also add its own UTC time stamp, which would allow one to dissect the RTT into its constituent one-way delays to detect the currently bloated direction. (Think ICMP type 13/14 message pairs “on steroids”, with higher resolution than milliseconds, though for bufferbloat detection ms resolution would probably be sufficient anyway.) Currently, I hear that ISP equipment will not treat ICMP requests with priority though.
>>>
>>> Not exactly.  We thought this through for some time and considered
>>> many angles.  Each method has its advantages and disadvantages.
>>>
>>> We decided not to use ICMP at all because of the reasons you stated
>>> above.  We also decided not to use a "reflector" although, as you said,
>>> it would allow us to separate upload queue time from download.  We
>>> decided not to use this because it would be difficult to get ISPs to
>>> do this.
>>>
>>> Our final choice for the paper was "magic" IP packets.  This consists
>>> of an IP packet header and the timestamp.  The IP packet is "self
>>> addressed" and we trick iptables into emitting the packet on the correct
>>> interface.  This packet will be returned to us as soon as it reaches
>>> another IP layer (typically at the CMTS).
>>
>> Ah, thanks; I did not get this from reading over your paper (but that is probably caused by me being a layman and having read it very quickly). Question: how large is that packet on-the-wire? IP header plus 8 bytes makes me assume 20+8 = 28, but that is missing the ethernet header, so rather 14+20+8 = 42; but isn't the shortest ethernet frame 64 bytes?
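(For my own sanity, the same back-of-the-envelope as a few lines of Python, assuming plain IPv4 without options, an 8-byte timestamp payload, and the classic ethernet minimum of 64 bytes including the 4-byte FCS:)

    # On-the-wire size of the "magic packet", under the assumptions above.
    IP_HEADER = 20        # IPv4, no options
    TIMESTAMP = 8         # the probe payload
    ETH_HEADER = 14       # dst MAC + src MAC + ethertype
    ETH_FCS = 4           # frame check sequence
    ETH_MIN_PAYLOAD = 46  # shorter payloads are padded up to this

    ip_packet = IP_HEADER + TIMESTAMP          # 28 bytes
    payload = max(ip_packet, ETH_MIN_PAYLOAD)  # padded to 46 bytes
    frame = ETH_HEADER + payload + ETH_FCS     # 64 bytes, the minimum frame
    print(ip_packet, frame)                    # -> 28 64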
>>
>>> Here's a quick summary:
>>> ICMP -- Simple, but, needs the ISP's cooperation (good luck :)
>>> Reflector -- Separates upload queue time from download queue time,
>>> but, requires the ISP to cooperate and to build something for us.
>>> (good luck :)
>>> Magic IP packets -- Requires nothing from the ISP (YaY!  We have a
>>> winner!), but, is a little more complex.
>>
>> At the cost that you only get the RTT instead of the two one-way delays, as one ideally would like. But as stated above, if you combine your method with, say, DOCSIS 3.1 PIE, which promises to keep the upstream under tight control, then any RTT changes should (mainly) be caused by downstream over-buffering, effectively allowing you to use your method to control the downstream well.
>>
>>>> Also I am confused what they actually simulated: “The modems and CMTS were equipped with ASQM, CoDel and PIE,” and “However, the problem popularly called bufferbloat can move about among many queues some of which are resistant to traditional AQM such as Layer 2 MAC protocols used in cable/DSL links. We call this problem bufferbloat displacement.” seem to be slightly at odds. If modems and CMTS have decent AQMs, all they need to do is not stuff their sub-IP layer queues and be done with it. The way I understood the CableLabs PIE story, they intended to do exactly that, so at least the “buffer displacement” remedy by ASQM reads a bit like a straw-man argument. But as I am a) not of the CS field, and b) only glossed over the paper, most likely I am missing something important that is clearly in the paper...
>>>
>>> Good point!  However, once again it's not quite that simple.  Queues
>>> are necessary to absorb short term variations in packet arrival rate
>>> (or bursts).  The queue required for any flow is given by the
>>> bandwidth delay product.
>>
>> Not a CS person, but that does not ring fully true; this basically assumes a physical medium that will dump all packets into the buffer at one time point and send them out a full delay period later. I think in reality packets will be serialized, and hence some packets will most likely have left the buffer before all have arrived, so the BDP is more of an estimate of an upper bound… not that there is anything wrong with designing solutions that aim to handle the worst case well.
>>
>>> Since we don't know the delay we can't
>>> predict the queue size in advance.  What I'm getting at is the
>>> equipment manufacturers aren't putting in humongous queues because
>>> they are stupid, they are putting them there because in some cases you
>>> might really need that large of a queue.
>>
>> I thought our current pet hypothesis is that they aim for BDP at their highest rated speeds or so, and all customers running that (high-speed-capable) equipment at lower rates are out of luck.
>>
>>> Statically sizing the queues is not the answer.  Managing the size of
>>> the queue with an algorithm is the answer. :)
>>
>> No disagreement here, we just discuss the how not the why ;)
>>
>> Best Regards
>> 	Sebastian
>>
>>>>
>>>> Best Regards
>>>> 	Sebastian
>>>>
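P.S.: To put a number on the BDP discussion above, a quick back-of-the-envelope; the rate and RTT are example values only:

    # Bandwidth-delay product as the worst-case queue-size estimate.
    # Example values: a 100 Mbit/s link and a 100 ms worst-case RTT.
    rate_bps = 100e6                    # 100 Mbit/s
    rtt_s = 0.100                       # 100 ms
    bdp_bytes = rate_bps * rtt_s / 8
    print(bdp_bytes / 1e6, "MB")        # -> 1.25 MB
    # A buffer sized for this worst case is heavily oversized for a
    # customer actually running at, say, 10 Mbit/s on the same hardware.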
>>>>> --
>>>>> Dave Täht
>>>>> What will it take to vastly improve wifi for everyone?
>>>>> https://plus.google.com/u/0/explore/makewififast
>>>>> _______________________________________________
>>>>> Cake mailing list
>>>>> Cake@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/cake
>>>>
>>