From: Bob McMahon <bob.mcmahon@broadcom.com>
Date: Fri, 13 Oct 2017 11:47:19 -0700
To: Toke Høiland-Jørgensen
Cc: Jesper Dangaard Brouer, make-wifi-fast@lists.bufferbloat.net, Johannes Berg
Subject: Re: [Make-wifi-fast] less latency, more filling... for wifi

Hi Toke,

The other thing that will cause the server thread(s) and listener thread to stop is -t when applied to the *server*, i.e. iperf -s -u -t 10 will cause a 10 second timeout on the lifetime of the server/listener thread(s). Some people don't want the Listener to stop, so when -D (daemon) is applied, -t will only terminate the server traffic threads. Many people asked for this because they wanted a way to time-bound these threads, specifically over the life of many tests.

Yeah, summing is a bit of a mess. I have some prototype code I've been playing with, but I'm still not sure what is going to be released.

For UDP, the source port must be unique per the quintuple (IP proto / src IP / src port / dst IP / dst port). Since the UDP server is merely waiting for packets, it has no knowledge about how to group them. So it groups based upon time, i.e. when new traffic shows up, it's put into an existing active group for summing.

I'm not sure of a good way to fix this. I think the client would have to modify the payload and, per a -P, tell the server the UDP src ports that belong in the same group. Then the server could assign groups based upon a key in the payload.
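To make the payload-key idea concrete, here's a rough sketch of what the server-side grouping could look like. The header layout (a 4-byte signed sequence number followed by a 4-byte client-chosen group id) is an assumption for illustration only, not iperf 2's actual wire format:

```python
import struct
from collections import defaultdict

# Hypothetical payload header: 4-byte signed sequence number, then a
# 4-byte group id the client picks once per test (shared by all -P
# streams of that test). NOT iperf 2's real wire format -- a sketch only.
HDR = struct.Struct("!iI")

class SumGroups:
    """Sum received bytes per client-chosen group key instead of per arrival time."""

    def __init__(self):
        self.bytes_per_group = defaultdict(int)
        self.finished = set()

    def on_packet(self, payload):
        seq, group = HDR.unpack_from(payload)
        if seq < 0:
            # A negative sequence number marks end-of-test for this group.
            self.finished.add(group)
        else:
            self.bytes_per_group[group] += len(payload)
        return group
```

With something like this, two streams arriving from different source ports but carrying the same group id get summed together deliberately, and a lost end-of-test packet only leaves its own group open rather than polluting the next test's sums.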
Thoughts and comments welcome,
Bob

On Fri, Oct 13, 2017 at 2:28 AM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> Bob McMahon <bob.mcmahon@broadcom.com> writes:
>
> > Thanks Toke. Let me look into this. Is there packet loss during your
> > tests? Can you share the output of the client and server per the error
> > scenario?
>
> Yeah, there's definitely packet loss.
>
> > With iperf 2 there is no TCP test exchange; rather, UDP test information
> > is derived from packets in flight. The server determines a UDP test is
> > finished by detecting a negative sequence number in the payload. In
> > theory, this should separate UDP tests. The server detects a new UDP
> > stream by receiving a packet from a new source socket. If the
> > packet carrying the negative sequence number is lost, then summing
> > across "tests" would be expected (even though not desired) per the
> > current design and implementation. We intentionally left this as is, as
> > we didn't want to change the startup behavior nor require the network to
> > support TCP connections in order to run a UDP test.
>
> Ah, so basically, if the last packet from the client is dropped, the
> server is not going to notice that the test ended and just keeps
> counting? That would definitely explain the behaviour I'm seeing.
>
> So if another test starts from a different source port, the server is
> still going to count the same totals? That seems kinda odd :)
>
> > Since we know UDP is unreliable, we do control both client and server over
> > ssh pipes, and perform summing in flight per the interval reporting.
> > Operating system signals are used to kill the server. The iperf sum and
> > final reports are ignored. Unfortunately, I can't publish this package
> > with iperf 2, for both technical and licensing reasons. There is some
> > skeleton code in Python 3.5 with asyncio
> > <https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/flows.py>
> > that may be of use. A next step here is to add support for pandas
> > <http://pandas.pydata.org/index.html>, and possibly some control chart
> > <https://en.wikipedia.org/wiki/Control_chart> techniques (both single and
> > multivariate
> > <http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc34.htm>) for both
> > regressions and outlier detection.
>
> No worries, I already have the setup scripts to handle restarting the
> server, and I parse the output with Flent. Just wanted to point out this
> behaviour as it was giving me some very odd results before I started
> systematically restarting the server...
>
> -Toke
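For anyone curious what the ssh-pipe control pattern quoted above looks like in outline, here is a minimal asyncio sketch: start the server process, wait for the client to finish, then stop the server with an OS signal rather than trusting its final report. The generic argv lists stand in for the real ssh/iperf command lines, since the actual harness isn't published:

```python
import asyncio
import signal

async def run_pair(server_argv, client_argv):
    # In the real harness both argv lists would be ssh command lines,
    # e.g. ["ssh", host, "iperf", "-s", "-u"]; plain argv here for the sketch.
    server = await asyncio.create_subprocess_exec(
        *server_argv, stdout=asyncio.subprocess.PIPE)
    client = await asyncio.create_subprocess_exec(
        *client_argv, stdout=asyncio.subprocess.PIPE)
    out, _ = await client.communicate()   # wait for the client to finish
    server.send_signal(signal.SIGINT)     # kill the server with an OS signal
    await server.wait()
    return out.decode()                   # client's interval reports
```

The point of the pattern is that test lifetime is bounded by the controller, not by an end-of-test packet that UDP may drop, which sidesteps the lost-negative-sequence-number problem entirely.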