From: Pete Heist <peteheist@gmail.com>
Date: Fri, 7 Apr 2017 13:42:52 +0200
To: Sebastian Moeller <moeller0@gmx.de>
Cc: Jonathan Morton, Cake List <cake@lists.bufferbloat.net>
Subject: Re: [Cake] flow isolation for ISPs
> On Apr 7, 2017, at 1:13 PM, Sebastian Moeller <moeller0@gmx.de> wrote:
>
> Hi Peter,
>
>> On Apr 7, 2017, at 11:37, Pete Heist <peteheist@gmail.com> wrote:
>>
>> Ok, I’m still getting familiar with how triple-isolate is implemented. For example, I was surprised in my test setup that no fairness is enforced when four client IPs connect to a single server IP, but I understand from this discussion (https://github.com/dtaht/sch_cake/issues/46) that that is actually what is expected. We would probably use dual-srchost and dual-dsthost in the backhaul, which seems to work very well, and in the backhaul we have the information to specify that in both directions. (Also, there is no NAT to deal with at this level.)

I didn’t write that very well before, so just to clarify: there’s nothing more we need to specify for dual-srchost and dual-dsthost to work. It’s just that we control both directions of the traffic, so we can use dual-srchost on upstream egress and dual-dsthost on downstream egress, something like the sketch below.
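
For concreteness, a rough sketch of what that could look like on one of the backhaul routers. Purely illustrative: the interface names, directions and rates are made up and it assumes a cake-capable tc; dual-srchost/dual-dsthost are the actual cake keywords.

    # Upstream-facing interface: members are the traffic *sources*, so
    # per-member fairness on egress toward the Internet uses dual-srchost.
    tc qdisc replace dev eth0 root cake bandwidth 95mbit dual-srchost

    # Member-facing interface: members are the *destinations*, so egress
    # toward them (the downstream direction) uses dual-dsthost.
    tc qdisc replace dev eth1 root cake bandwidth 95mbit dual-dsthost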

>> Just to see if I understand the marking proposal, here’s the behavior I would expect: if there are two TCP flows (on egress) with mark 1 and one flow with mark 2, and together they saturate the link, then the measured rate of the two flows with mark 1 will add up to the rate of the single flow with mark 2. Is that right? And would you still add a keyword to specify that the mark should be used at all?
>>
>> I’m not sure where the 1024 limit comes from, but it would probably be fine in our case as of now, with 800 members. Even in the future, I don’t think occasional collisions would be a big problem, and I think there are things we could do to minimize them.
>
> Seeing your 800 members, I remember a discussion over at the LEDE forum, https://forum.lede-project.org/t/lede-as-a-dedicated-qos-bufferbloat-appliance/1861/27?u=moeller0 where orangetek used cake on a wired backhaul for approximately 600 end users. He reported for the number of concurrent flows: “As far as I can tell, around 25k-30k during busy hours.”
> He also increased the number of CAKE_BINs in the code to 64k. So depending on your users, 1024 might be a bit tight, given that you still ideally want flows not to share bins if possible (sure, cake is great at avoiding sharing unless impossible, but with enough flows you might want/need to simply hard-code your cake instances for higher limits).
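
(To make my marking question above a bit more concrete, this is the kind of setup I have in mind; only a sketch, the subnets and mark values are invented, and only the marking side exists today. The cake side would depend on whatever keyword the proposal ends up with.)

    # Tag each member's traffic with a distinct firewall mark on the
    # backhaul router, e.g. one mark per member subnet:
    iptables -t mangle -A FORWARD -s 10.72.1.0/24 -j MARK --set-mark 1
    iptables -t mangle -A FORWARD -s 10.72.2.0/24 -j MARK --set-mark 2
    # ...one rule (and mark) per member. Cake would then need to be told,
    # via the proposed keyword, to enforce fairness per mark rather than
    # (or in addition to) per host address.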

I see, so the 1024 limit probably comes from the CAKE_QUEUES define. :)
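
(Assuming that guess is right, bumping it would look roughly like this; the 65536 just mirrors what orangetek reportedly did, and the exact build and reload steps depend on how the out-of-tree module is built on the router in question.)

    # The number of flow queues per cake instance is a compile-time
    # constant, roughly:  #define CAKE_QUEUES (1024)
    sed -i 's/CAKE_QUEUES (1024)/CAKE_QUEUES (65536)/' sch_cake.c
    make
    # Reload the module (after removing any cake qdiscs that are in use);
    # note that a larger table also means more memory per cake instance.
    sudo rmmod sch_cake && sudo insmod ./sch_cake.ko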

So far, we’re not looking to use Cake on the main Internet router. I’m just not sure yet if it would be appropriate for a gigabit uplink (that also doesn’t reach saturation, as far as can be discerned from MRTG plots). We’re taking things step by step, and looking at some of the backhaul routers first, where there can sometimes be congestion.

As for the average peak number of concurrent flows on the Internet router, I could find that out from the admin. Total throughput for the gigabit Internet uplink is on a public page (https://www.lbcfree.net/mrtg/10.101.254.194_24.html), but flow counts are not.

Thanks for the tip...