From: Hans-Kristian Bakke <hkbakke@gmail.com>
Date: Wed, 25 Jan 2017 22:03:14 +0100
To: bloat@lists.bufferbloat.net
Subject: Re: [Bloat] Initial tests with BBR in kernel 4.9

I did some more testing with fq as a replacement for pfifo_fast, and it now behaves just as well. It must have been some strange artifact. My questions still stand, however.

Regards,
Hans-Kristian

On 25 January 2017 at 21:54, Hans-Kristian Bakke <hkbakke@gmail.com> wrote:
Hi

Kernel 4.9 finally landed in Debian testing, so I could at last test BBR in a real-life environment that I have struggled to get any kind of performance out of.

The challenge at hand is UDP-based OpenVPN through Europe at around 35 ms RTT to my VPN provider, with plenty of bandwidth available at both ends and everything in between completely unknown. After tuning the UDP buffers up to make room for my 500 Mbit/s symmetrical bandwidth at 35 ms, the download direction seemed to work nicely at an unreliable 150 to 300 Mbit/s, while the upload was stuck at 30 to 60 Mbit/s.
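For anyone reproducing this, the buffer tuning was along these lines. Only a sketch: the BDP arithmetic is exact, but the concrete values and the OpenVPN directives are illustrative assumptions, not a copy of my config:

    # BDP at 500 Mbit/s and 35 ms RTT:
    #   (500e6 / 8) * 0.035 ~ 2.2 MB per direction
    # Raise the kernel's socket buffer ceilings well above that:
    sysctl -w net.core.rmem_max=4194304
    sysctl -w net.core.wmem_max=4194304
    # Then let OpenVPN request big buffers (in the .conf):
    #   sndbuf 4194304
    #   rcvbuf 4194304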

Just by activating BBR, the bandwidth instantly shot up to around 150 Mbit/s using a fat TCP test to a public iperf3 server located near my VPN exit point in the Netherlands. Replace BBR with CUBIC again and the performance is once again all over the place, ranging from very bad to bad, but never better than 1/3 of BBR's "steady state". In other words: "instant WIN!"
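Activating it is just the standard sysctl switch. A minimal sketch, assuming the tcp_bbr module that ships with 4.9 and a placeholder iperf3 server name:

    # Select BBR and the fq qdisc it is normally paired with
    modprobe tcp_bbr
    sysctl -w net.ipv4.tcp_congestion_control=bbr
    sysctl -w net.core.default_qdisc=fq   # applies to newly attached qdiscs
    # Fat TCP test against a nearby server (name is a placeholder)
    iperf3 -c iperf.example.net -t 30      # upload
    iperf3 -c iperf.example.net -t 30 -R   # download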
However, seeing the requirement of fq and pacing for BBR, and noticing that I am running pfifo_fast within a VM with a virtio NIC on a Proxmox VE host with fq_codel on all physical interfaces, I was surprised to see that it worked so well.
I then replaced pfifo_fast with fq, and the performance went right down to only 1-4 Mbit/s from around 150 Mbit/s. Removing fq again restored the performance at once. (The swap itself is sketched below.)
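The swap is a one-liner each way; the interface name here is an assumption for a virtio guest:

    # Inside the VM (ens18 is a guess for the virtio NIC name)
    tc qdisc replace dev ens18 root fq
    tc -s qdisc show dev ens18    # verify, and watch backlog/drops
    # Revert:
    tc qdisc replace dev ens18 root pfifo_fast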

I have some questions for you guys who know a lot more than I do about these things:
1. Does fq (and fq_codel) even work reliably in a VM? What is the best choice of default qdisc to use in a VM in general? (A quick way to inspect what is actually in effect is sketched below the list.)
2. Why does BBR immediately "fix" all my issues with upload through that "unreliable" big-BDP link with pfifo_fast, when fq pacing is a requirement?
3. Could fq_codel on the physical host be the reason that it still works?
4. Does BBR work _only_ with fq pacing, or could fq_codel be used as a replacement?
5. Is BBR perhaps modified to do the right thing without having to change the qdisc in the current kernel 4.9?
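For anyone repeating the experiments, the relevant state can be read back with standard commands (interface name again an assumption):

    sysctl net.ipv4.tcp_congestion_control net.core.default_qdisc
    tc qdisc show dev ens18    # which qdisc the guest NIC is really using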

Sorry for the long post, but this is an interesting topic!

Regards,
Hans-Kristian Bakke
