From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jonathan Morton
To: Stephen Hemminger
Cc: Sebastian Moeller, cake@lists.bufferbloat.net
Date: Fri, 23 Dec 2016 05:44:52 +0200
Subject: Re: [Cake] upstreaming cake in 2017?
Message-Id: <9A8ACB3B-8411-414E-B2C3-ABE2C276B351@gmail.com>
In-Reply-To: <20161222174349.5bd5dffd@xeon-e3>
References: <20161222174349.5bd5dffd@xeon-e3>
List-Id: Cake - FQ_codel the next generation

> On 23 Dec, 2016, at 03:43, Stephen Hemminger wrote:
>
> It would also help to have a description of which use-case cake is trying to solve:
> - how much configuration (lots HTB) or zero (fq_codel)

One of Cake’s central goals is that configuration should be straightforward for non-experts. Some flexibility is sacrificed as a result, but many common use-cases are covered by very concise configuration. That is why there are so many keywords.

> - AP, CPE, backbone router, host system?

The principal use-case is either end of last-mile links, i.e. CPE and head-end equipment. Actual deployment in the latter is much less likely than in the former, but it remains a goal worth aspiring to. These links are very often the bottleneck for consumers and businesses alike.

Cake could also be used at strategic locations in internal (corporate or ISP) networks, e.g. building-to-building or site-to-site links.

For APs, the make-wifi-fast work is a better choice, because it adapts natively to the wifi environment.
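To make the point about concise configuration concrete, typical last-mile invocations look something like this (the interface name and rates are purely illustrative, not a recommendation):

```shell
# Shape egress to just under the uplink rate; three-tin Diffserv
# handling and NAT-aware flow isolation are each a single keyword.
tc qdisc replace dev eth0 root cake bandwidth 950kbit diffserv3 nat

# Per-host fairness instead of per-flow, for links where one machine
# should not be able to crowd out the others.
tc qdisc replace dev eth0 root cake bandwidth 950kbit hosts
```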
Cake could gainfully be used on the wired LAN side of an AP, if inbound wifi traffic can saturate the wired link.

Deployment on core backbone networks is not a goal. There, if anything, you need hardware-accelerated simple AQM, simply to keep up.

> Also what assumptions about the network are being made?

As far as Diffserv is concerned, I explicitly assume that the standard RFC-defined DSCPs and PHBs are in use, which obviates any concerns about Diffserv policy boundaries. No other assumption makes sense, other than that Diffserv should be ignored entirely (which is also RFC-compliant), or that legacy Precedence codes are in use (which is deprecated but remains plausible) - and both of these additional cases are also supported.

Cake does *not* assume that DSCPs are trustworthy. It respects them as given, but employs straightforward countermeasures against misuse (e.g. higher “priority” applies only up to some fraction of capacity) and incentives for correct use (e.g. latency-sensitive tins get more aggressive AQM). This improves deployability, and thus solves one half of the classic chicken-and-egg deployment problem.

So, if Cake gets deployed widely, an incentive will emerge for applications to mark their traffic correctly.

Incidentally, the biggest arguments against Precedence are that it has no class of *lower* priority than the default (which is useful for swarm traffic), and that it was intended for use with strict priority, which only makes sense in a trusted network (which the Internet isn’t).

If you have complex or unusual Diffserv needs, you can still use Cake as a leaf qdisc under a classifier, ignoring its internal Diffserv support.

Cake’s shaper assumes that the link has consistent throughput. This assumption tends to break down on wireless links; you have to set the shaped bandwidth conservatively and still accept some occasional reversion to device buffering.
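Returning to the Diffserv countermeasure for a moment: the “priority only up to some fraction of capacity” idea can be sketched as a toy model. This is an illustration of the principle only, not cake’s actual algorithm, and the 25% threshold is made up:

```python
# Toy model: a high-priority tin keeps strict priority only while its
# share of recently sent bytes stays under a threshold fraction of
# capacity; beyond that it defers to bulk traffic.

def pick_tin(priority_sent, total_sent, threshold=0.25):
    """Choose which backlogged tin to serve next."""
    if total_sent == 0 or priority_sent / total_sent < threshold:
        return "priority"   # under its share: strict priority applies
    return "bulk"           # over its share: no longer pre-empts bulk

# With both tins continuously backlogged, the priority tin converges
# to roughly its threshold share rather than starving bulk traffic.
sent = {"priority": 0, "bulk": 0}
for _ in range(10000):
    tin = pick_tin(sent["priority"], sent["priority"] + sent["bulk"])
    sent[tin] += 1          # one unit-sized packet per round
share = sent["priority"] / 10000
```

Marking everything as high priority therefore buys nothing once the threshold is exceeded, which is exactly the disincentive against DSCP abuse.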
BQL helps a lot, but implementing it for certain types of device is very hard.

Conversely, Cake’s shaper carefully tries *not* to rely on downstream devices having large buffers of their own, unlike token-bucket shapers. Avoiding this assumption improves latency at a given throughput, and vice versa.

Cake also assumes, in general, that the number of flows on the link at any given instant is not too large - a few hundred is acceptable. Behaviour should degrade fairly gracefully once flow-hash collisions can no longer be avoided, and will self-recover to peak performance after anomalous load spikes. This assumption is, however, likely to break down on backbones and major backhaul networks. Cake does support treating entire IP addresses as single flows, which may extend its applicability.

 - Jonathan Morton
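P.S. To put “a few hundred” in perspective: assuming 1024 flow queues (and ignoring the set-associative hashing, which does better than this naive estimate), a back-of-envelope birthday calculation gives the expected number of flows forced to share a queue:

```python
# Naive estimate, not cake's actual behaviour: n flows hashed uniformly
# into q queues; expected collisions = n minus expected occupied queues.

def expected_collisions(n, q=1024):
    occupied = q * (1 - (1 - 1 / q) ** n)  # expected occupied queues
    return n - occupied                     # flows sharing a queue

for n in (100, 300, 1000):
    print(n, round(expected_collisions(n), 1))
```

At a hundred flows, collisions are a handful; at three hundred they are already noticeable, and well before a thousand the graceful degradation matters.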