From mboxrd@z Thu Jan 1 00:00:00 1970
In-Reply-To: <20130722212156.GA6511@thyrsus.com>
References: <20130722212156.GA6511@thyrsus.com>
Date: Mon, 22 Jul 2013 14:54:53 -0700
From: Dave Taht
To: esr@thyrsus.com
Cc: aqm@ietf.org, bloat
Subject: Re: [Bloat] The Remy paper (was: Re: CS244's work on netflix streaming)
List-Id: General list for discussing Bufferbloat

On Mon, Jul 22, 2013 at 2:21 PM, Eric S.
Raymond wrote:
> Dave Taht:
>> Also the discussion of "remy" going on is pretty nifty.
>>
>> https://plus.google.com/u/0/103530621949492999968/posts/2L9e4kxo9y3
>
> Thanks for that link. I've been mad curious what your take on the Remy
> paper was ever since I read it myself. Because while I didn't spot any
> obvious bogons, you know the domain far better than I do.

Well, at present it's a little too computationally intensive for this
decade...

> The only thing about their methods that troubles me is a suspicion that the
> generated algorithms might be overspecialized - that is, very vulnerable
> to small changes in traffic statistics.

They point that out too, in (for example) fig 11. I LIKE fig 11, and I keep
thinking about it - although codel is designed to run against varying link
rates, it makes sense to optimize for the most commonly observed actual
bandwidth, if possible, and this result shows that that is possible (if not
yet understandable).

So, what if you could generate a model showing that your bandwidth range at
location Z is X, optimize for that, then download and run the result?

I also like their presentation mechanism of plotting queue delay against
throughput, inverted so that the "best" result lands up and to the right
(see figs 4 and 5). I'll use this in future versions of rrul in particular.

The thing is, we are exploring a very difficult subject (congestion
control) with the equivalent of stone knives and bearskins. We have all
this computing power lying around, serving ads! Why not use it in useful
fashions? Take for comparison the kind of work and compute resources that
have been poured into physics, and sigh.

While they restrict their argument to e2e computation, the same techniques
could apply to optimizing edge networks, interprovider interconnects, the
core, calculating better multi-path routing, and so on... for all of which
running a model of existing traffic, then implementing it and iterating
fairly rapidly, could be automated...

I don't mind being obsoleted...
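(As an aside, the "up and to the right" plot convention mentioned above can
be sketched in a few lines. This is only an illustration of the coordinate
mapping, not anything from the paper itself; the scheme names and numbers
below are invented placeholders, not measurements.)

```python
import math

def plot_coords(results):
    """Map (name, throughput, delay_ms) tuples to plot coordinates.

    x is negated log-delay, so LOWER delay moves a point to the RIGHT;
    y is throughput, so HIGHER throughput moves a point UP. Better
    schemes therefore land up and to the right, as in figs 4 and 5.
    """
    return {name: (-math.log10(delay_ms), tput)
            for name, tput, delay_ms in results}

# Placeholder data: scheme_a has more throughput but much more delay.
results = [
    ("scheme_a", 8.0, 100.0),
    ("scheme_b", 7.5, 10.0),
]
coords = plot_coords(results)
# scheme_b plots to the right of scheme_a (lower delay), slightly below it.
```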
"How should we design network protocols that free subnetworks and links to
evolve freely, ensuring that the endpoints will adapt properly no matter
what the lower layers do? We believe that the best way to approach this
question is to take the design of specific algorithmic mechanisms out of
the hands of human designers (no matter how sophisticated!), and make the
end-to-end algorithm be a function of the desired overall behavior."

...

I'm unfond of how g+ (and now email) splits up a conversation; here's what
I said on another thread of discussion:

https://plus.google.com/u/0/107942175615993706558/posts/8MRTLpRyAju

"My happiness at the paper is keyed on three factors: 1) identifying what a
perfect algorithm could be like, and the boundaries for it, is inspiring as
a goal, 2) being able to study those results in search of inspiration...
and 3) (if you didn't notice), just how well sfqcodel did across a wide
range of benchmarks. No, it wasn't as good as the specialized algos that
took weeks of compute time, but it hit a sweet spot in most cases, and it
ain't my fault cubic is so aggressive. I thought of the result as "Kasparov
vs Deep Blue", with a clear win on the chess clock to Kasparov."

(Kasparov being Kathie, Van, and Eric)

Now, I've been looking at bittorrent traffic of late - the principal
preliminary result is that traffic in slow start competes well against it;
long-duration traffic less so, but it still gets about double its "fair
share" (I still don't understand that result), and under fq_codel, to
compete like that, it drops a LOT of packets.

> --
> Eric S. Raymond

--
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html