From: Jonathan Morton
Date: Sun, 3 Apr 2011 21:03:53 +0300
To: Juliusz Chroboczek
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Thoughts on Stochastic Fair Blue

On 3 Apr, 2011, at 7:35 pm, Juliusz Chroboczek wrote:

> Sorry for the delay, but I wanted to think this over.
>
>> My second observation is that marking and dropping both happen at the
>> tail of the queue, not the head. This delays the congestion information
>> reaching the receiver, and from there to the sender, by the length of
>> the queue
>
> It's very difficult to drop at the head of the queue in SFB, since we'd
> need to find a suitable packet that's in the same flow. Since SFB makes
> heroic efforts to keep the queue size small, this shouldn't make much of
> a difference.
>
> Your suggestion is most certainly valid for plain BLUE.
If the queue is overfull, then the efforts at proactive congestion control have failed and tail-dropping is fine. I was thinking more of the probabilistic marking/dropping that occurs under normal conditions, which should happen at the head of the queue to minimise feedback latency. The head of the queue needs to decrement the bucket counters anyway, so it shouldn't be extra overhead to move the probability check there.

>> Another observation is that packets are not re-ordered by SFB, which
>> (given the Bloom filter) is potentially a missed opportunity.
>
> What's your suggestion?
>
>> However, they can be re-ordered in the current implementation by
>> using child qdiscs such as SFQ
>
> I don't see how that buys you anything. SFB is very aggressive with
> dropping packets when under congestion, and the packet drop happens
> *before* the child qdisc gets a chance to see the packet; hence, putting
> SFQ below SFB won't buy you much, it'll just reorder packets in the
> small queue that SFB allows. Or am I missing something?

To an analogue modem or a GPRS connection, even a single default SFB bucket can take a significant time to drain. This gets worse if there are several independent flows. (And this is not necessarily abuse by the end-user, but a legitimate result of going out of range of the 3G signal while using it as such.)

Consider the extreme case, where you have a dozen flows each filling their bucket with a dozen packets, and then a DNS reply packet arrives. Without packet reordering, the DNS packet must wait for 144 packets to get out of its way - roughly 1.7 megabits at full packet size, which takes tens of seconds at GPRS rates - so the DNS resolver will time out. With reordering, it only has to wait for 12 packets (possibly fewer still with DRR, which I haven't investigated in detail), which is fast enough that the connection, though very sluggish, is still functional.

Under a less extreme scenario, suppose I'm downloading an iApp update to my phone, and meanwhile it decides to check for new email - IMAP being a relatively chatty request-response protocol, without large amounts of data being transferred. Then the train goes under a bridge and suddenly I'm on GPRS, so the tower's queue fills up with the iApp download. With packet reordering, the IMAP session keeps going fast enough to download a new email in a few seconds, and then happily gets out of the way. Without it, the iApp download monopolises the connection and the IMAP job takes minutes to complete. Email is a canonical low-bandwidth application; users reasonably expect it to Just Work.

Bear in mind also that my 3G data subscription, though unlimited in per-month traffic, is capped at 500Kbps instantaneous, and is therefore only one order of magnitude faster than an analogue modem. Packet reordering wouldn't have as dramatic an effect on functionality as at the slower speed, but it would still make the connection feel better to use. Given that aggregating IP packets into over-the-air frames is normal practice, reordering would effectively put one packet from each of several flows into each frame - a nice form of interleaving that also helps to reduce the impact of transitory unreachability.

If you kept a list of packets in each bucket, rather than just the count of them, then you could simply iterate over the buckets (in all rows) when dequeueing, doing a *better* job than SFQ because you have a different hash salt in each row.
You would then need to delete the entry from the relevant bucket in all of the rows - this *is* extra overhead, but it's probably no greater than passing it down to SFQ.

 - Jonathan
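
P.S. To make the head-of-queue suggestion concrete, here's a rough
sketch in ordinary user-space C. The names and the row/bucket sizing
are invented for illustration - this is not the actual sch_sfb code:

/* Sketch only: the Blue probability check moves from enqueue (tail)
   to dequeue (head), where the bucket counters get decremented
   anyway, so the congestion signal rides out on the oldest packet. */

#include <stdbool.h>
#include <stdlib.h>

#define ROWS 4              /* hash rows (hypothetical sizing) */
#define COLS 16             /* buckets per row                 */

struct sfb_bucket {
    unsigned qlen;          /* packets accounted to this bucket   */
    double   p_mark;        /* Blue marking/dropping probability  */
};

static struct sfb_bucket buckets[ROWS][COLS];

struct pkt {
    unsigned hash[ROWS];    /* salted bucket index in each row */
};

/* Called once per dequeued packet.  Returns true if the packet
   should be marked (or dropped when not ECN-capable).  SFB marks
   on the *minimum* probability over the packet's buckets, and we
   are walking those buckets already to decrement the counters, so
   the check at the head is essentially free. */
static bool mark_at_head(const struct pkt *p)
{
    double p_min = 1.0;

    for (unsigned row = 0; row < ROWS; row++) {
        struct sfb_bucket *b = &buckets[row][p->hash[row]];
        if (b->qlen > 0)
            b->qlen--;                  /* existing bookkeeping */
        if (b->p_mark < p_min)
            p_min = b->p_mark;
    }
    return (double)rand() / RAND_MAX < p_min;
}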
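
And an equally hand-waved sketch of the per-bucket list idea; again
all names are made up and initialisation is cut to the minimum. The
round-robin deliberately sweeps the buckets of every row, so two
flows that collide under one row's salt are still separated under
another's - which is where it can do better than SFQ:

#include <stddef.h>

#define ROWS 4
#define COLS 16

struct pkt;

struct node {                   /* intrusive list node, one per row */
    struct pkt  *owner;
    struct node *prev, *next;
};

struct pkt {
    unsigned    hash[ROWS];     /* salted bucket index per row     */
    struct node link[ROWS];     /* this packet's place in each row */
};

struct sfb_bucket {
    struct node head;           /* circular list with sentinel, was a count */
};

static struct sfb_bucket buckets[ROWS][COLS];
static unsigned rr;             /* round-robin cursor over all buckets */

static void list_init(struct node *h) { h->next = h->prev = h; }

static void list_del(struct node *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
    n->next = n->prev = n;
}

static void list_add_tail(struct node *h, struct node *n)
{
    n->prev = h->prev;
    n->next = h;
    h->prev->next = n;
    h->prev = n;
}

static void buckets_init(void)
{
    for (unsigned r = 0; r < ROWS; r++)
        for (unsigned c = 0; c < COLS; c++)
            list_init(&buckets[r][c].head);
}

/* Enqueue: link the packet into its bucket in every row. */
static void bucket_enqueue(struct pkt *p)
{
    for (unsigned row = 0; row < ROWS; row++) {
        p->link[row].owner = p;
        list_add_tail(&buckets[row][p->hash[row]].head, &p->link[row]);
    }
}

/* Dequeue: take the head of the next non-empty bucket, then unlink
   the packet from its bucket in *all* rows - the extra overhead
   conceded above, comparable to handing the packet down to SFQ. */
static struct pkt *bucket_dequeue(void)
{
    for (unsigned i = 0; i < ROWS * COLS; i++) {
        unsigned idx = (rr + i) % (ROWS * COLS);
        struct node *h = &buckets[idx / COLS][idx % COLS].head;

        if (h->next != h) {                 /* bucket non-empty */
            struct pkt *p = h->next->owner;
            rr = (idx + 1) % (ROWS * COLS);
            for (unsigned row = 0; row < ROWS; row++)
                list_del(&p->link[row]);
            return p;
        }
    }
    return NULL;                            /* queue empty */
}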