From: Sebastian Moeller
Date: Sun, 3 Nov 2019 12:14:00 +0100
To: Hal Murray
Cc: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] bbr on slashdot

> On Nov 3, 2019, at 00:38, Hal Murray wrote:
>
> Sebastian Moeller said:
>> Interestingly, the naive expectation in the vice text is equal sharing
>> between all concurrent flows, if only we had a system that could actually
>> help achieving this kind of set-up that is fair to each flow...
>
> Is there consensus on what a flow is?

I believe yes: a "flow" is the set of packets that all share the same [5|3]-tuple of source and destination address, the protocol, and, if they exist, the source and destination port numbers. But, as you point out, that is not necessarily the only useful flow definition for the purpose of link-sharing.
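To make that concrete, here is a minimal sketch (my own illustration, not code from any real implementation; the function name and the 1024-bucket count are made up) of what keying on that [5|3]-tuple looks like. Conceptually this is what per-flow schedulers in the fq_codel family do when they assign packets to queues:

import hashlib

def flow_bucket(src_ip, dst_ip, proto, src_port=0, dst_port=0, buckets=1024):
    """Map a packet's [5|3]-tuple onto one of `buckets` queues.

    Toy illustration only: real schedulers use a seeded hash inside the
    kernel, but the idea is the same -- every packet of the same flow
    lands in the same queue, and the scheduler then serves the queues
    in a fair rotation.
    """
    tup = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.blake2b(tup, digest_size=4).digest()
    return int.from_bytes(digest, "big") % buckets

Note that two TCP connections between the same pair of hosts land in two different buckets and hence get two shares, which is exactly the "pigs can game it" property you bring up further down.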
> Or what the unit of traffic that fairness measures should be?

I would say the consensus is that the status quo, "no fairness guarantee of any kind (as long as nobody gets starved)", is decidedly not what users expect. In a sense any better definition will be an improvement.

> It seems to me that it depends on where you are located.

Sure, except the current anything-goes is also not optimal for any party (well, those that push large aggregate traffic volumes are fine with the status quo, I would guess, as almost everything else will require more resources).

> Consider upstream traffic:
>
> If I'm a workstation or server, I probably want to give equal weight to each connection.

Yes, but then just do equal-weight scheduling on your egress, no?

> If I'm an exit router at a residence, I probably want to give equal weight to each IP Address.

But which one, the source or the destination one?

> If not, pigs can game the system by making multiple connections.

Which is, I might add, no worse than what we have right now; multiple connections already give an improvement in the current system (witness those download managers that use multiple parallel TCP connections), and sending more packets into the network already causes congestion that affects everybody else. 5-tuple fairness actually increases the difficulty of gaming the system (not by much, but the attacker now needs to put some randomness into the packet headers). Using less header information makes the attacker's work more and more challenging, even though attacks will always be possible; but IMHO the value of per-flow fairness is not increased robustness against attacks, but rather better predictable performance under normal conditions.

As a "transit" provider, I would probably look only at the IP header, which means either source, destination, or source-and-destination addresses (or prefixes for IPv6) as the flow-defining entities (in that case just randomizing port numbers will not gain much for an attacker; now IP addresses need to be randomized).

> But if I have a server, maybe I want to reserve or limit the
> bandwidth it gets - reserve to keep the workstation/laptop traffic from
> killing the server and limit so the workstation/laptop people can get some
> work done when the server is busy.

And nothing in a default per-flow-fairness world will prohibit you from doing this, just as it is possible in the current anything-goes world, no?

> If I'm an ISP customer facing router, I probably want to give equal weight to
> each customer, probably scaled by how much bandwidth they are paying for.
>
> I don't know how to handle backbone routers. You probably want to treat each
> customer as a flow, again scaled by how much bandwidth they are paying for.

In theory perhaps; in practice that scaling requires either tagging each packet with its bandwidth allotment or a costly lookup. As far as I can tell the current solution is to restrict end-user rates at the first viable choke point, the internet access link, and simply not care in the backbone or at peerings. IMHO using any flow definition at backbone or peering links will be a big improvement over the status quo, so I believe the actual choice is not that important as long as one is chosen.
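And to make the per-host/per-customer variants concrete as well, a toy sketch (again my own, with invented names, not any real router code): key on the source address only and give each host a deficit-round-robin quantum proportional to its configured rate. Keying on the address means extra parallel connections buy nothing extra, and the per-host quantum is where a "scaled by what they pay for" policy would live; the per-address rate table is exactly the costly lookup I mean:

from collections import defaultdict, deque

class HostFairScheduler:
    """Toy deficit-round-robin over per-host queues (illustration only)."""

    def __init__(self, quantum_by_host, default_quantum=1514):
        # quantum_by_host: bytes served per round for each source address,
        # e.g. scaled to the customer's contracted rate -- this table is
        # the costly per-customer lookup mentioned above.
        self.quantum = quantum_by_host
        self.default = default_quantum
        self.queues = defaultdict(deque)   # source address -> queued packets
        self.deficit = defaultdict(int)

    def enqueue(self, src_ip, packet):
        # Keying on the address, not the 5-tuple: extra parallel
        # connections from the same host all share this one queue.
        self.queues[src_ip].append(packet)

    def one_round(self):
        """Serve every backlogged host once; yields packets (byte strings)."""
        for host in list(self.queues):
            q = self.queues[host]
            self.deficit[host] += self.quantum.get(host, self.default)
            while q and len(q[0]) <= self.deficit[host]:
                self.deficit[host] -= len(q[0])
                yield q.popleft()
            if not q:                      # idle hosts do not bank credit
                del self.queues[host], self.deficit[host]

A host that opens twenty parallel TCP connections still only drains at its quantum's share per round, which is roughly the behaviour I would want at a residence exit router; whether to key on the source or the destination address (or both) is exactly your question above. If I remember correctly, cake's dual-srchost/dual-dsthost modes implement roughly this per-host idea (combined with per-flow fairness inside each host).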
> But an IP level packet doesn't tell you anything about which customer it came
> from.

Well, yes and no: unless spoofed, the source IP address pretty much tells you the source "customer"; in IPv6 it is going to be the prefix for most end users, in IPv4 it will be the full /32. IMHO the goal should not be to find the optimal solution (which heavily depends on the optimization criterion), the goal should be to improve upon the status quo and make network performance easier to predict.

> If this is old news, please point me at a good writeup.

I have no write-up of this available, but none of my points above are original in any way, so I am sure there must be write-ups somewhere.

Best Regards
Sebastian

> --
> These are my opinions.  I hate spam.