From: jb <justin@dslr.net>
To: Simon Barber <simon@superduper.net>
Cc: bloat <bloat@lists.bufferbloat.net>
Date: Wed, 22 Apr 2015 12:56:07 +1000
Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in

That makes sense. Ok.

On Wed, Apr 22, 2015 at 12:14 PM, Simon Barber <simon@superduper.net> wrote:

> If you set the window only a little larger than the actual BDP of the
> link, then there is only a little data available to fill the buffer, so
> given large buffers it will take many connections to overflow the
> buffer.
>
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
> On April 21, 2015 4:18:10 PM jb <justin@dslr.net> wrote:
>
>> Regarding the low TCP RWIN max setting, and smoothness.
>>
>> One remark up-thread still bothers me. It was pointed out (and it makes
>> sense to me) that a low TCP max rwin applies per stream, so if you open
>> multiple streams you are still going to flood the SOHO buffer.
>>
>> However, my observation with a low server rwin max was that the smooth
>> upload graph was the same whether I did 1 upload stream, 6 upload
>> streams, or apparently any number.
>> I would have thought that with 6 streams the PC would try to flood 6x
>> as much data as with 1 stream, and that this would put you back to
>> square one. However, this was not what happened. It was puzzling that,
>> no matter what, one server-side setting got rid of the chop.
>> Anyone got any plausible explanations for this?
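Simon's arithmetic above suggests one plausible answer: if the per-stream
window is only a little above the BDP, each stream contributes only a small
standing queue, and several small queues can still fall well short of a large
buffer. The sketch below puts assumed numbers on it (1 Mbit/s uplink, 50 ms
RTT, an 8 KB per-stream window cap, a 256 KB modem buffer; illustrative
figures, not measurements from this thread):

# Standing queue created by n upload streams, each capped at rwin.
# All numbers are assumed for illustration, not measured in this thread.

link_rate = 1_000_000 / 8   # uplink rate in bytes/s (1 Mbit/s)
rtt = 0.050                 # base round-trip time, seconds
bdp = link_rate * rtt       # bytes genuinely "in flight" on the path

rwin = 8 * 1024             # per-stream receive-window cap, bytes
buffer_size = 256 * 1024    # bottleneck (SOHO modem) buffer, bytes

for n in (1, 6, 12):
    # Senders can have at most n * rwin outstanding; the excess over
    # the path's BDP sits queued in the bottleneck buffer.
    queued = max(n * rwin - bdp, 0)
    print(f"{n:2d} streams: queue {queued / 1024:5.1f} KB "
          f"= {100 * queued / buffer_size:4.1f}% of buffer, "
          f"adds {1000 * queued / link_rate:4.0f} ms")

With those assumed figures, 1 stream queues about 2 KB, 6 streams about
42 KB, and 12 streams about 90 KB, roughly a third of the buffer: latency
grows with stream count, but nothing overflows, so there is no loss-driven
chop. An uncapped window, by contrast, lets even a single stream fill the
buffer until packets drop.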
>> If not, I'll run some more tests with 1, 6 and 12 streams to a low-rwin
>> server, and post the graphs to the list. I might also have to start
>> graphing the interface traffic at a sub-second level, rather than the
>> browser traffic, to make sure the browser isn't lying about the stalls
>> and chop.
>>
>> This 7800N has settings for traffic priority and for utilisation (as a
>> percentage). The utilisation % didn't help, but priority did: making web
>> traffic low priority and SSH high priority smoothed things out a lot
>> without changing the speed. Perhaps "low" priority means it isn't so
>> eager to fill its buffers.
>>
>> thanks
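Graphing the interface traffic at a sub-second level, as jb proposes, only
requires polling the kernel's per-interface byte counters. A minimal sketch
for a Linux host follows; the interface name and the 100 ms period are
assumptions to adjust:

#!/usr/bin/env python3
# Sample transmit throughput at 100 ms resolution by polling the
# per-interface byte counters in /proc/net/dev (Linux).
import time

IFACE = "eth0"       # assumed interface name; adjust as needed
INTERVAL = 0.1       # 100 ms sampling period

def tx_bytes(iface):
    with open("/proc/net/dev") as f:
        for line in f:
            name, _, rest = line.partition(":")
            if name.strip() == iface:
                # field 8 after the colon is the transmit byte counter
                return int(rest.split()[8])
    raise ValueError(f"interface {iface!r} not found")

prev = tx_bytes(IFACE)
while True:
    time.sleep(INTERVAL)
    cur = tx_bytes(IFACE)
    rate = (cur - prev) / INTERVAL          # bytes per second
    print(f"{time.time():.1f} {rate * 8 / 1e6:6.2f} Mbit/s")
    prev = cur

Sampling the interface rather than the browser sidesteps any smoothing the
browser applies to its own progress events, so stalls show up as genuine
dips in the counter deltas.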
>> On Wed, Apr 22, 2015 at 8:13 AM, jb <justin@dslr.net> wrote:
>>
>>> Today I've switched it back to the large receive window max.
>>>
>>> The customer base is everything from GPRS to gigabit. But I know from
>>> experience that if a test doesn't flatten someone's gigabit connection,
>>> they will immediately assume "oh, congested servers, insufficient
>>> capacity", and the early adopters of fiber to the home and the faster
>>> cable products are the most visible in tech forums and so on.
>>>
>>> It would be interesting to set up one or a few servers with a small
>>> receive window, take them out of the pool, and allow an option to
>>> select them, so that they would not participate in any default run.
>>> Then, as you point out, the test can suggest trying those as an option
>>> for results with chaotic upload speeds and probable bloat. The person
>>> would notice the beauty of the more intimate connection between their
>>> kernel and a server, and work harder to eliminate the problematic
>>> equipment. Or they'd stop telling me the test was bugged.
>>>
>>> thanks
>>>
>>> On Wed, Apr 22, 2015 at 12:28 AM, David Lang <david@lang.hm> wrote:
>>>
>>>> On Tue, 21 Apr 2015, David Lang wrote:
>>>>
>>>>> On Tue, 21 Apr 2015, David Lang wrote:
>>>>>
>>>>>>> I suspect you guys are going to say the server should be left
>>>>>>> with a large max receive window.. and let people complain to find
>>>>>>> out what their issue is.
>>>>>>
>>>>>> What is your customer base? How important is it to provide faster
>>>>>> service to the fiber users? Are they transferring ISO images, so
>>>>>> that the difference is significant to them, or are they downloading
>>>>>> web pages, where it's the difference between a half second and a
>>>>>> quarter second? Remember that you are seeing this on the upload
>>>>>> side.
>>>>>>
>>>>>> In the long run, fixing the problem at the client side is the best
>>>>>> thing to do, but in the meantime you sometimes have to work around
>>>>>> broken customer equipment.
>>>>>
>>>>> For the speedtest servers it should be set large: the purpose is to
>>>>> test the quality of the customer's equipment, so you don't want to
>>>>> do anything on your end that papers over the problem, only to have
>>>>> the customer think things are good and then hit problems when
>>>>> connecting to another server that doesn't implement work-arounds.
>>>>
>>>> Just after hitting send, it occurred to me that it may be the right
>>>> thing to have the server that's being hit by the test play with these
>>>> settings. If the user's connection works well at lower settings but
>>>> has problems at higher ones, the point where the problems start may
>>>> be useful to know.
>>>>
>>>> David Lang
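One way to build the small-receive-window test server discussed above is to
shrink SO_RCVBUF before listen(), which bounds the receive window (and the
window scale) TCP will advertise on every accepted connection. This is a
sketch only, assuming a Linux host; it is not a description of how
dslreports actually configured its servers:

#!/usr/bin/env python3
# Sketch of an upload sink with a capped receive window (Linux).
# SO_RCVBUF must be set before listen() so the cap takes effect on
# the window scaling negotiated for each accepted connection.
import socket

RWIN_CAP = 32 * 1024   # requested cap in bytes; the kernel may round it

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RWIN_CAP)
srv.bind(("0.0.0.0", 8080))    # port 8080 is an arbitrary choice
srv.listen(16)

while True:
    conn, peer = srv.accept()
    with conn:
        # Drain the client's upload as fast as possible; the capped
        # window, not this loop, is what limits the sender.
        while conn.recv(65536):
            pass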