From: Hans-Kristian Bakke
Date: Wed, 25 Jan 2017 21:54:59 +0100
To: bloat@lists.bufferbloat.net
Subject: [Bloat] Initial tests with BBR in kernel 4.9

Hi

Kernel 4.9 finally landed in Debian testing, so I could at last test BBR in a real-life environment that I have struggled to get any kind of performance out of.

The challenge at hand is UDP-based OpenVPN through Europe at around 35 ms RTT to my VPN provider, with plenty of bandwidth available at both ends and everything completely unknown in between. After tuning the UDP buffers up to make room for my 500 mbit/s symmetrical bandwidth at 35 ms, the download side seemed to work nicely at an unreliable 150 to 300 mbit/s, while the upload was stuck at 30 to 60 mbit/s.

Just by activating BBR, the bandwidth instantly shot up to around 150 mbit/s using a fat TCP test to a public iperf3 server located near my VPN exit point in the Netherlands. Replace BBR with CUBIC again and the performance is once again all over the place, ranging from very bad to bad, but never better than 1/3 of BBR's "steady state". In other words: "instant WIN!"
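For those curious, the tuning and the BBR switch were roughly along these lines. The buffer values are only illustrative (sized with headroom above the ~2.2 MB BDP of 500 mbit/s at 35 ms), and the iperf3 hostname is a placeholder:

    # Raise the socket buffer ceilings; BDP = 500 mbit/s * 35 ms ~ 2.2 MB,
    # so 4 MB gives some headroom (values illustrative)
    sysctl -w net.core.rmem_max=4194304
    sysctl -w net.core.wmem_max=4194304

    # OpenVPN can then be told to ask for bigger UDP socket buffers in its config:
    #   sndbuf 4194304
    #   rcvbuf 4194304

    # Switch TCP congestion control to BBR (available from kernel 4.9)
    sysctl -w net.ipv4.tcp_congestion_control=bbr

    # Throughput test against a public iperf3 server (hostname is a placeholder)
    iperf3 -c iperf.example.net -t 30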
However, given that BBR requires fq and pacing, and that I am running pfifo_fast inside a VM with a virtio NIC on a Proxmox VE host with fq_codel on all physical interfaces, I was surprised to see it work so well. I then replaced pfifo_fast with fq and the performance dropped right down to only 1-4 mbit/s from around 150 mbit/s. Removing fq again restored the performance at once (rough commands are sketched at the end of this mail).

I have some questions for you guys who know a lot more about these things than I do:

1. Do fq (and fq_codel) even work reliably in a VM? What is the best choice of default qdisc to use in a VM in general?
2. Why does BBR immediately "fix" all my upload issues through that "unreliable" big-BDP link with pfifo_fast, when fq pacing is a requirement?
3. Could fq_codel on the physical host be the reason that it still works?
4. Does BBR _only_ work with fq pacing, or could fq_codel be used as a replacement?
5. Is BBR perhaps modified to do the right thing without having to change the qdisc in the current kernel 4.9?

Sorry for the long post, but this is an interesting topic!

Regards,
Hans-Kristian Bakke
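P.S. The qdisc swap in the VM was done along these lines (ens18 is just a placeholder name for the virtio NIC):

    # Show the current qdisc on the VM's NIC (interface name is a placeholder)
    tc qdisc show dev ens18

    # Replace pfifo_fast with fq ...
    tc qdisc replace dev ens18 root fq

    # ... and switch back again
    tc qdisc replace dev ens18 root pfifo_fast

    # The system-wide default qdisc can also be set via sysctl
    sysctl -w net.core.default_qdisc=fq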