From: Daniel Havey
To: Sebastian Moeller
Cc: cake@lists.bufferbloat.net, cerowrt-devel@lists.bufferbloat.net, bloat
Subject: Re: [Cerowrt-devel] [Cake] active sensing queue management
Date: Wed, 10 Jun 2015 17:10:36 -0700

Hmmm, maybe I can help clarify.

Bufferbloat occurs in the slowest queue on the path, because the other queues are faster and will drain. AQM algorithms only work if they are placed where the packets pile up (i.e. at the slowest queue in the path). This is documented in Kathy and Van's CoDel paper.

This is usually all well and good, because we know where the bottleneck (the slowest queue in the path) is: it is the IP layer in the modem, where the ISP implements its rate limiter. That is why algorithms such as PIE and CoDel are implemented in the IP layer on the modem.

Suppose the full committed rate of the token bucket rate limiter is 8 Mbps. This means the queue at the IP layer in the modem can emit packets at a sustained rate of 8 Mbps. The problem occurs during peak hours, when the ISP is not providing the full committed rate of 8 Mbps because some queue in the system (probably in the access link) is providing something less, say 7.5 Mbps for the sake of discussion.

We know (see Kathy and Van's paper) that AQM algorithms only work when they are placed at the slowest queue. But here the AQM is placed at the queue that is capable of 8 Mbps, and that is no longer the slowest queue. The AQM algorithm will not work under these conditions. This is what is shown in the paper, where the CoDel and PIE performance goes to hell in a handbasket. The ASQM algorithm is designed to address this problem.
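To make the arithmetic concrete, here is a toy simulation (Python; the 8 and 7.5 Mbps figures are just the ones from the discussion above, not measurements from the paper) showing where the standing queue actually forms when the access link slows below the committed rate:

# Toy discrete-time model of the situation described above: traffic arrives
# at the modem's IP-layer queue (where the AQM sits, drained at the 8 Mbps
# committed rate) and then passes through an access-link queue that, during
# peak hours, only drains at 7.5 Mbps.  Illustration only, not from the paper.

MBPS = 1_000_000 / 8          # bytes per second per Mbps
ARRIVAL_RATE  = 8.0 * MBPS    # offered load, bytes/s
IP_LAYER_RATE = 8.0 * MBPS    # modem IP-layer (AQM-managed) drain rate
ACCESS_RATE   = 7.5 * MBPS    # actual access-link drain rate at peak hours
TICK = 0.01                   # seconds per simulation step

ip_queue = 0.0      # bytes queued at the AQM-managed IP layer
access_queue = 0.0  # bytes queued below it, in the access link

for step in range(int(60 / TICK)):            # simulate one minute
    ip_queue += ARRIVAL_RATE * TICK           # traffic arrives at the modem
    moved = min(ip_queue, IP_LAYER_RATE * TICK)
    ip_queue -= moved                         # IP layer forwards at 8 Mbps...
    access_queue += moved                     # ...into the access link,
    access_queue -= min(access_queue, ACCESS_RATE * TICK)  # which drains at 7.5

print(f"IP-layer (AQM) queue: {ip_queue/1000:.1f} KB")      # stays ~empty
print(f"Access-link queue:    {access_queue/1000:.1f} KB")  # ~3750 KB after 1 min

With a 0.5 Mbps mismatch the access-link backlog grows by roughly 3.75 MB per minute while the AQM-managed queue stays empty, which is why CoDel or PIE at the IP layer never sees anything to act on.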
On Wed, Jun 10, 2015 at 1:54 PM, Sebastian Moeller wrote:
> Hi Dave,
>
> On Jun 10, 2015, at 21:53 , Dave Taht wrote:
>
>> http://dl.ifip.org/db/conf/networking/networking2015/1570064417.pdf
>>
>> gargoyle's qos system follows a similar approach, using htb + sfq, and
>> a short ttl udp flow.
>>
>> Doing this sort of measurement, then floating the rate control with
>> "cake" would be fairly easy (although it tends to be a bit more
>> compute intensive not being on a fast path)
>>
>> What is sort of missing here is trying to figure out which side of the
>> bottleneck is the bottleneck (up or down).

Yeah, we never did figure out how to separate the up from the downlink. However, we just consider the access link as a whole (up + down) and mark/drop according to ratios of queuing time. Overall it seems to work well, but we never did a mathematical analysis. Kind of like saying it's not a "bug", it's a feature. And in this case it is true, since both sides can experience bloat.

> Yeah, they rely on having a reliable packet reflector upstream of the
> “bottleneck” so they get their timestamped probe packets returned. In the
> paper they used either uplink or downlink traffic, so figuring out where
> the bottleneck was was easy; at least this is how I interpret “Experiments
> were performed in the upload (data flowing from the users to the CDNs) as
> well as in the download direction.” At least this is what I get from their
> short description in glossing over the paper.
>
> Nice paper, but really not a full solution either. Unless the ISPs
> cooperate in supplying stable reflectors powerful enough to support all
> downstream customers. But if the ISPs cooperate, I would guess, they could
> eradicate downstream bufferbloat to begin with. Or the ISPs could have the
> reflector also add its own UTC timestamp, which would allow dissecting the
> RTT into its constituent one-way delays to detect the currently bloated
> direction. (Think ICMP type 13/14 message pairs "on steroids", with higher
> resolution than milliseconds, but for bufferbloat detection ms resolution
> would probably be sufficient anyway.) Currently, I hear that ISP equipment
> will not treat ICMP requests with priority though.

Not exactly. We thought this through for some time and considered many angles. Each method has its advantages and disadvantages. We decided not to use ICMP at all, for the reasons you stated above. We also decided not to use a "reflector": although, as you said, it would allow us to separate upload queue time from download queue time, it would be difficult to get ISPs to deploy one. Our final choice for the paper was "magic" IP packets. Such a probe consists of an IP packet header and a timestamp. The IP packet is "self-addressed", and we trick iptables into emitting it on the correct interface. The packet is returned to us as soon as it reaches another IP layer (typically at the CMTS).

Here's a quick summary:

ICMP -- Simple, but needs the ISP's cooperation (good luck :)
Reflector -- Separates upload queue time from download queue time, but requires the ISP to cooperate and build something for us (good luck :)
Magic IP packets -- Requires nothing from the ISP (YaY! We have a winner!), but is a little more complex.
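In case it helps, here is a rough sketch of that probe idea in Python with scapy. This is not the code from the paper: the interface name, address, port, and the use of scapy instead of the iptables trick are all assumptions for illustration.

# Rough sketch (not the authors' code) of the "magic packet" idea described
# above: send a self-addressed, timestamped probe out of the WAN interface so
# that the next IP hop (typically the CMTS) routes it straight back, and use
# the round trip as an estimate of the queuing delay below the IP layer.
# Needs root.  Interface, address, and port below are illustrative assumptions.
import time
from scapy.all import IP, UDP, Raw, send, AsyncSniffer

WAN_IFACE = "eth0"        # assumed WAN-facing interface
WAN_IP = "203.0.113.7"    # assumed public address of this host (documentation range)
PROBE_PORT = 54321        # assumed UDP port used to recognise our own probes

def measure_queue_delay(timeout=1.0):
    """Send one self-addressed probe and time how long it takes to come back."""
    # Capture only returning packets ("inbound") so we don't time our own
    # outgoing copy; if your libpcap lacks that qualifier, match on the
    # decremented TTL of the returned packet instead.
    sniffer = AsyncSniffer(iface=WAN_IFACE, count=1,
                           filter=f"inbound and udp and port {PROBE_PORT}")
    sniffer.start()
    t_send = time.time()
    probe = (IP(src=WAN_IP, dst=WAN_IP) /
             UDP(sport=PROBE_PORT, dport=PROBE_PORT) / Raw(b"asqm-probe"))
    send(probe, iface=WAN_IFACE, verbose=False)   # force it out the WAN interface
    sniffer.join(timeout)
    if sniffer.running:                           # timed out, probe never came back
        sniffer.stop()
    if not sniffer.results:
        return None
    return float(sniffer.results[0].time) - t_send

print("estimated access-link queueing delay:", measure_queue_delay())

The point is just that nothing on the ISP side has to change: the CMTS returns the self-addressed packet as part of normal forwarding.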
> Also I am confused what they actually simulated: “The modems and CMTS were
> equipped with ASQM, CoDel and PIE,” and “However, the problem popularly
> called bufferbloat can move about among many queues some of which are
> resistant to traditional AQM such as Layer 2 MAC protocols used in
> cable/DSL links. We call this problem bufferbloat displacement.” seem to be
> slightly at odds. If modems and CMTS have decent AQMs all they need to do
> is not stuff their sub-IP layer queues and be done with it. The way I
> understood the CableLabs PIE story, they intended to do exactly that, so at
> least the “bufferbloat displacement” remedy by ASQM reads a bit like a straw
> man argument. But as I am a) not of the cs field, and b) only glossed over
> the paper, most likely I am missing something important that is clearly in
> the paper...

Good point! However, once again it's not quite that simple. Queues are necessary to absorb short-term variations in packet arrival rate (bursts). The queue required for any flow is given by the bandwidth-delay product (a rough worked example is in the P.S. below). Since we don't know the delay, we can't predict the queue size in advance. What I'm getting at is that the equipment manufacturers aren't putting in humongous queues because they are stupid; they are putting them there because in some cases you might really need a queue that large. Statically sizing the queues is not the answer. Managing the size of the queue with an algorithm is the answer. :)

>
> Best Regards
>       Sebastian
>
>> --
>> Dave Täht
>> What will it take to vastly improve wifi for everyone?
>> https://plus.google.com/u/0/explore/makewififast
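P.S. To put rough numbers on the bandwidth-delay product point above (the rates and RTTs are illustrative assumptions, not figures from the paper):

# Worked example of the bandwidth-delay product (BDP) rule of thumb:
# buffer ~= bottleneck rate * worst-case RTT.
def bdp_bytes(rate_mbps, rtt_ms):
    return rate_mbps * 1e6 / 8 * rtt_ms / 1000

for rtt in (20, 100, 500):   # ms: nearby server, long path, badly bloated path
    print(f"8 Mbps link, {rtt:3d} ms RTT -> {bdp_bytes(8, rtt)/1000:6.1f} KB of buffer")

The spread (20 KB vs 500 KB for the same 8 Mbps link) is why statically sizing the queue is hard and why managing it with an algorithm makes more sense.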