From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 4 May 2016 11:02:14 +0300
From:
Roman Yeryomin
To: Dave Taht
Cc: make-wifi-fast@lists.bufferbloat.net, codel@lists.bufferbloat.net, ath10k
Subject: Re: [Codel] iperf3 udp flood behavior at higher rates

On 3 May 2016 at 02:18, Dave Taht wrote:
> to fork the fq_codel_drop discussion a bit...
>
> I have two new boxes[1] up and running that I hope to be able to test
> ath10k/ath9k hardware with. For this test, one sits in the middle as a
> router and a nuc i3 box acts as the server, all ports pure ethernet...
> there's a switch in the way, too.
>
> On tcp via netperf I get the expected ~940 mbits.
>
> On udp via iperf3 (again, all pure ethernet) - in neither case below
> am I seeing any drops in the qdisc itself anywhere on the path, yet I am
> only achieving 500mbit.

That's interesting; I have no problems with UDP over ethernet. What
about TCP with iperf3?

> ?
>
> 1) Using the
>
> iperf3 -c 172.26.16.130 -u -b900M -R -l1472 -t600
>
> udp flood version, I get some loss on the initial burst, but none
> *reported* after that, and a peak at about ~500Mbits.
>
> [ ID] Interval       Transfer     Bandwidth       Jitter    Lost/Total Datagrams
> [  4] 0.00-1.00 sec  52.1 MBytes  437 Mbits/sec   0.037 ms  1276/38379 (3.3%)
> [  4] 1.00-2.00 sec  54.3 MBytes  456 Mbits/sec   0.042 ms  0/38699 (0%)
> [  4] 2.00-3.00 sec  56.1 MBytes  470 Mbits/sec   0.030 ms  0/39933 (0%)
>
> 2) Flipping the sense of the test by getting rid of -R (from the nuc)
>
> iperf3 -c 172.26.16.130 -u -b900M -l1472 -t600
>
> I get on the other side a steady-state throughput of a little over
> 520mbits (with 41% loss reported consistently)
>
> [  5] 37.00-38.00 sec  64.2 MBytes  539 Mbits/sec  0.026 ms  31613/77355 (41%)
> [  5] 38.00-39.00 sec  62.8 MBytes  527 Mbits/sec  0.023 ms  31517/76255 (41%)
> [  5] 39.00-40.00 sec  62.0 MBytes  520 Mbits/sec  0.033 ms  31052/75201 (41%)
>
> On the other side:
>
> [  4] 77.00-78.00 sec  111 MBytes  929 Mbits/sec  78915
> [  4] 78.00-79.00 sec  103 MBytes  864 Mbits/sec  73371
> [  4] 79.00-80.00 sec  108 MBytes  907 Mbits/sec  77034
> [  4] 80.00-81.00 sec  107 MBytes  900 Mbits/sec  76423
> [  4] 81.00-82.00 sec  104 MBytes  875 Mbits/sec  74277
> [  4] 82.00-83.00 sec  113 MBytes  950 Mbits/sec  80666
>
> Thinking that perhaps I was seeing loss in the rx ring, I used ethtool
> to increase that from the default 256 to 4096...
>
> only to hang things thoroughly... :( and I'm watching things reboot now.
>
> Netperf does not have a multi-hop capable udp flood test (rick jones
> can explain why...)
>
> As I recall, on this thread iperf3 was being run on a mac box as a
> client, and I'll dig one up - but was it also osx on the other side of
> the test?
>
> And what other params would I tweak on linux to see a udp flood go faster?

I would try making the packets smaller (-l); maybe they are being
fragmented somewhere.

> Topology looks like this:
>
> apu1 <-> apu2 <-> switch <-> nuc.
>
> I could put another switch in the way, I am always nervous about
> invoking hw flow control...
>
> [1] http://www.pcengines.ch/apu2c4.htm
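For what it's worth, the fragmentation theory can be sanity-checked with simple arithmetic before rerunning anything: iperf3's -l sets the UDP payload size, and the packet on the wire adds 8 bytes of UDP header and 20 bytes of IPv4 header. A rough sketch (the 1500-byte MTU and plain IPv4/UDP headers are assumptions about this path, not anything measured; VLAN tags or tunnels would shrink the effective MTU):

```shell
#!/bin/sh
# Sketch: would a given iperf3 -l value fragment at a given MTU?
# Assumes plain IPv4 + UDP (20 + 8 bytes of headers) and no extra
# encapsulation on the path -- adjust if there is VLAN/tunnel overhead.

payload=1472                 # the -l value used in the tests above
mtu=1500                     # standard ethernet MTU (assumed, not measured)
wire=$((payload + 20 + 8))   # on-wire IPv4 packet size

echo "wire size: $wire"
if [ "$wire" -gt "$mtu" ]; then
    echo "would fragment at mtu=$mtu"
else
    echo "fits mtu=$mtu, no fragmentation"
fi
```

So on a clean 1500-MTU ethernet path -l1472 lands exactly on the MTU and should not fragment; dropping to something like -l1400 is still a cheap way to rule out hidden encapsulation overhead somewhere along apu1 <-> apu2 <-> switch <-> nuc.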