From: David Lang <david@lang.hm>
To: Simon Barber <simon@superduper.net>
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] DSLReports Speed Test has latency measurement built-in
Date: Wed, 22 Apr 2015 10:35:23 -0700 (PDT)
Message-ID: <alpine.DEB.2.02.1504221031570.1097@nftneq.ynat.uz>
In-Reply-To: <14ce18b0a40.27f7.e972a4f4d859b00521b2b659602cb2f9@superduper.net>
Data that's received and not used doesn't really matter (a tree falls in the
woods type of thing).
Head-of-line blocking can cause a chunk of packets to be retransmitted even
though the receiving machine got them the first time. So looking at the
received bytes gives you a false picture of what is going on.
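To make that concrete, here is a minimal sketch (hypothetical offsets, not
taken from any capture) of the two ways of counting: every byte that arrives
on the wire versus the contiguous in-order bytes the receiver can actually
hand to the application.

  # Raw bytes received vs. bytes deliverable in order (illustrative only).
  def account(segments):
      """segments: list of (byte_offset, length) tuples in arrival order."""
      received = 0    # every byte that arrived, duplicates included
      seen = set()    # byte offsets received at least once
      delivered = 0   # length of the contiguous prefix the app can read
      for seq, length in segments:
          received += length
          seen.update(range(seq, seq + length))
          while delivered in seen:      # advance past any newly filled gap
              delivered += 1
      return received, delivered

  # Bytes 1000-1999 are lost, later segments still arrive, then the sender
  # retransmits everything from 1000 onward (the retransmitted chunk above).
  arrivals = [(0, 1000), (2000, 1000), (3000, 1000),
              (1000, 1000), (2000, 1000), (3000, 1000)]
  print(account(arrivals))    # (6000, 4000): 6000 bytes arrived, 4000 useful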
David Lang
On Wed, 22 Apr 2015, Simon Barber wrote:
> The bumps are due to packet loss causing head of line blocking. Until the
> lost packet is retransmitted the receiver can't release any subsequent
> received packets to the application due to the requirement for in order
> delivery. If you counted received bytes with a packet counter rather than
> looking at application level you would be able to illustrate that data was
> being received smoothly (even though out of order).
>
> Simon
>
> Sent with AquaMail for Android
> http://www.aqua-mail.com
>
>
> On April 21, 2015 7:21:09 AM David Lang <david@lang.hm> wrote:
>
>> On Tue, 21 Apr 2015, jb wrote:
>>
>> >> the receiver advertizes a large receive window, so the sender doesn't
>> >> pause until there is that much data outstanding, or they get a timeout
>> >> of a packet as a signal to slow down.
>> >>
>> >> and because you have a gig-E link locally, your machine generates
>> >> traffic very rapidly, until all that data is 'in flight'. but it's
>> >> really sitting in the buffer of a router trying to get through.
>> >
>> > Hmm, then I have a quandary because I can easily solve the nasty bumpy
>> > upload graphs by keeping the advertised receive window on the server
>> > capped low, however then, paradoxically, there is no more sign of buffer
>> > bloat in the result, at least for the upload phase.
>> >
>> > (The graph under the upload/download graphs for my results shows almost
>> > no latency increase during the upload phase, now).
>> >
>> > Or, I can crank it back open again, serving people with fiber connections
>> > without having to run heaps of streams in parallel -- and then have
>> > people complain that the upload result is inefficient, or bumpy, vs what
>> > they expect.
>>
>> well, many people expect it to be bumpy (I've heard ISPs explain to
>> customers that when a link is full it is bumpy, that's just the way things
>> work)
>>
>> > And I can't offer an option, because the server receive window (I think)
>> > cannot be set on a case by case basis. You set it for all TCP and forget
>> > it.
>>
>> I think you are right
>>
>> > I suspect you guys are going to say the server should be left with a
>> > large max receive window.. and let people complain to find out what
>> > their issue is.
>>
>> what is your customer base? how important is it to provide faster service
>> to the fiber users? Are they transferring ISO images so the difference is
>> significant to them? or are they downloading web pages where it's the
>> difference between a half second and a quarter second? remember that you
>> are seeing this on the upload side.
>>
>> in the long run, fixing the problem at the client side is the best thing
>> to do, but in the meantime, you sometimes have to work around broken
>> customer stuff.
>>
>> > BTW my setup is wired to a Billion 7800N, which is a DSL modem and
>> > router. I believe it is a Linux-based device (judging from the system
>> > log).
>>
>> if it's linux based, it would be interesting to learn what sort of
>> settings it has. It may be one of the rarer devices that has something in
>> place already to do active queue management.
>>
>> David Lang
>>
>> > cheers,
>> > -Justin
>> >
>> > On Tue, Apr 21, 2015 at 2:47 PM, David Lang <david@lang.hm> wrote:
>> >
>> >> On Tue, 21 Apr 2015, jb wrote:
>> >>
>> >>> I've discovered something perhaps you guys can explain it better or
>> >>> shed some light.
>> >>> It isn't specifically to do with buffer bloat but it is to do with TCP
>> >>> tuning.
>> >>>
>> >>> Attached are two pictures of my upload to the New York speed test
>> >>> server with 1 stream.
>> >>> It doesn't make any difference if it is 1 stream or 8 streams, the
>> >>> picture and behaviour remain the same.
>> >>> I am 200ms from New York so it qualifies as a fairly long (but not
>> >>> very fat) pipe.
>> >>>
>> >>> The nice smooth one is with Linux tcp_rmem set to '4096 32768 65535'
>> >>> (on the server).
>> >>> The ugly bumpy one is with Linux tcp_rmem set to '4096 65535 67108864'
>> >>> (on the server).
>> >>>
>> >>> It actually doesn't matter what that last huge number is; once it goes
>> >>> much above 65k, e.g. 128k or 256k or beyond, things get bumpy and ugly
>> >>> on the upload speed.
>> >>>
>> >>> Now as I understand this setting, it is the TCP receive window that
>> >>> Linux advertises, and the last number sets the maximum size it can get
>> >>> to (for one TCP stream).
>> >>>
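For reference, the three tcp_rmem fields are the minimum, default (initial)
and maximum receive buffer, in bytes, that the kernel will auto-tune a
socket between; the advertised window is derived from that buffer. A quick
Linux-only sketch to read back the live values:

  # Print the current tcp_rmem triple: min, default (initial) and max bytes.
  with open("/proc/sys/net/ipv4/tcp_rmem") as f:
      rmem_min, rmem_default, rmem_max = map(int, f.read().split())
  print("min=%d default=%d max=%d" % (rmem_min, rmem_default, rmem_max))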
>> >>> Users with very fast upload speeds do not see an ugly bumpy upload
>> >>> graph; it is smooth and sustained.
>> >>> But for the majority of users (like me) with uploads less than 5 to
>> >>> 10 mbit, we frequently see the ugly graph.
>> >>>
>> >>> The second tcp_rmem setting is how I have been running the speed test
>> >>> servers.
>> >>>
>> >>> Up to now I thought this was just the distance of the speedtest from
>> >>> the interface: perhaps the browser was buffering a lot, and didn't
>> >>> feed back progress but now I realise the bumpy one is actually being
>> >>> influenced by the server receive window.
>> >>>
>> >>> I guess my question is this: Why does ALLOWING a large receive window
>> >>> appear to encourage problems with upload smoothness??
>> >>>
>> >>> This implies that setting the receive window should be done on a
>> >>> connection by connection basis: small for slow connections, large for
>> >>> high speed, long distance connections.
>> >>>
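For what it's worth, there is at least a per-socket knob at the sockets API
level, although it distinguishes listeners rather than individual clients:
setting SO_RCVBUF on a listening socket caps the receive buffer for every
connection accepted from it and disables the kernel's auto-tuning for those
sockets. A rough sketch, assuming Linux and a hypothetical second "capped"
test port, not something the test servers actually do:

  import socket

  srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  # Must be set before listen() so the window scale negotiated during the
  # handshake reflects it; Linux doubles the value to allow for overhead.
  srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
  srv.bind(("0.0.0.0", 8081))    # hypothetical "small-window" upload port
  srv.listen(128)
  conn, addr = srv.accept()      # conn inherits the capped receive buffer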
>> >>
>> >> This is classic bufferbloat
>> >>
>> >> the receiver advertizes a large receive window, so the sender doesn't
>> >> pause until there is that much data outstanding, or they get a timeout
>> >> of a packet as a signal to slow down.
>> >>
>> >> and because you have a gig-E link locally, your machine generates
>> >> traffic very rapidly, until all that data is 'in flight'. but it's
>> >> really sitting in the buffer of a router trying to get through.
>> >>
>> >> then when a packet times out, the sender slows down a smidge and
>> >> retransmits it. But the old packet is still sitting in a queue, eating
>> >> bandwidth. the packets behind it are also going to time out and be
>> >> retransmitted before your first retransmitted packet gets through, so
>> >> you have a large slug of data that's being retransmitted, and the first
>> >> of the replacement data can't get through until the last of the old
>> >> (timed out) data is transmitted.
>> >>
>> >> then when data starts flowing again, the sender again tries to fill up
>> >> the window with data in flight.
>> >>
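The arithmetic behind the "sitting in the buffer" part, using assumed numbers
rather than anything measured in this thread: whatever backlog the advertised
window lets the sender keep in flight turns directly into queueing delay for
everything else crossing that bottleneck.

  # Added latency = standing queue / uplink rate (illustrative numbers only).
  def queue_delay_ms(inflight_bytes, uplink_mbps):
      return inflight_bytes * 8 / (uplink_mbps * 1e6) * 1000

  for window in (64 * 1024, 256 * 1024, 1024 * 1024):
      print("%4d KB in flight on a 2 Mbit/s uplink -> %4.0f ms of queue"
            % (window // 1024, queue_delay_ms(window, 2)))
  # 64 KB -> 262 ms, 256 KB -> 1049 ms, 1024 KB -> 4194 ms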
>> >>> In addition, if I cap it to 65k, for reasons of smoothness, that means
>> >>> the bandwidth delay product will keep maximum speed per upload stream
>> >>> quite low. So a symmetric or gigabit connection is going to need a ton
>> >>> of parallel streams to see full speed.
>> >>>
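The back-of-the-envelope numbers behind that (the 200 ms RTT is the figure
quoted earlier in the thread, the rest is plain arithmetic): a single TCP
stream can move at most one receive window per round trip.

  def max_mbps(window_bytes, rtt_s):
      # one receive window per round trip is the ceiling for a single stream
      return window_bytes * 8 / rtt_s / 1e6

  print("64 KB window at 200 ms RTT caps one stream at ~%.2f Mbit/s"
        % max_mbps(65535, 0.2))                       # ~2.62 Mbit/s
  print("window needed to fill 1 Gbit/s at 200 ms RTT: ~%.1f MB"
        % (1e9 * 0.2 / 8 / 2**20))                    # ~23.8 MB
  print("or roughly %d parallel 64 KB streams"
        % round(1e9 / (max_mbps(65535, 0.2) * 1e6)))  # ~381 streams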
>> >>> Most puzzling is why anything special would be required on the
>> >>> Client --> Server side of the equation, but nothing much appears wrong
>> >>> with the Server --> Client side, whether speeds are very low (GPRS) or
>> >>> very high (gigabit).
>> >>>
>> >>
>> >> but what window sizes are these clients advertising?
>> >>
>> >>
>> >>> Note also that I am not yet sure if smoothness == better throughput. I
>> >>> have noticed upload speeds for some people often being under their
>> >>> claimed sync rate by 10 or 20%, but I've no logs that show the bumpy
>> >>> graph is showing inefficiency. Maybe.
>> >>>
>> >>
>> >> If you were to do a packet capture on the server side, you would see
>> >> that you have a bunch of packets that are arriving multiple times, but
>> >> the first time "doesn't count" because the replacement is already on
>> >> the way.
>> >>
>> >> so your overall throughput is lower for two reasons
>> >>
>> >> 1. it's bursty, and there are times when the connection actually is
>> >> idle (after you have a lot of timed out packets, the sender needs to
>> >> ramp up its speed again)
>> >>
>> >> 2. you are sending some packets multiple times, consuming more total
>> >> bandwidth for the same 'goodput' (effective throughput)
>> >>
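A hedged sketch of that server-side capture analysis (it assumes scapy is
installed and that "upload.pcap" is a hypothetical capture file; counting
exact repeats of the same sequence number and length is only an
approximation of retransmissions, but it shows the wasted bytes):

  from scapy.all import rdpcap, IP, TCP

  seen, dup_bytes, total_bytes = set(), 0, 0
  for pkt in rdpcap("upload.pcap"):
      if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
          continue
      payload = len(pkt[TCP].payload)
      if payload == 0:
          continue                       # skip pure ACKs
      key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].sport, pkt[TCP].dport,
             pkt[TCP].seq, payload)
      total_bytes += payload
      if key in seen:
          dup_bytes += payload           # these bytes crossed the link again
      else:
          seen.add(key)

  if total_bytes:
      print("bytes sent more than once: %.1f%%"
            % (100.0 * dup_bytes / total_bytes))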
>> >> David Lang
>> >>
>> >>
>> >>> help!
>> >>>
>> >>>
>> >>> On Tue, Apr 21, 2015 at 12:56 PM, Simon Barber <simon@superduper.net>
>> >>> wrote:
>> >>>
>> >>>> One thing users understand is slow web access. Perhaps translating
>> >>>> the latency measurement into 'a typical web page will take X seconds
>> >>>> longer to load', or even stating the impact as 'this latency causes a
>> >>>> typical web page to load slower, as if your connection was only YY%
>> >>>> of the measured speed.'
>> >>>>
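Rough arithmetic only (the round-trip count is an assumption, not a
measurement): a page load is a chain of dependent round trips, so the extra
queueing delay is paid once per round trip in that chain.

  dependent_round_trips = 20   # assumed: DNS + handshakes + serial requests
  for added_ms in (20, 100, 500):
      print("%3d ms of extra queueing -> about %.1f s slower page load"
            % (added_ms, dependent_round_trips * added_ms / 1000.0))
  # 20 ms -> 0.4 s, 100 ms -> 2.0 s, 500 ms -> 10.0 s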
>> >>>> Simon
>> >>>>
>> >>>> Sent with AquaMail for Android
>> >>>> http://www.aqua-mail.com
>> >>>>
>> >>>>
>> >>>>
>> >>>> On April 19, 2015 1:54:19 PM Jonathan Morton <chromatix99@gmail.com>
>> >>>> wrote:
>> >>>>
>> >>>>>>>>> Frequency readouts are probably more accessible to the latter.
>> >>>>>>>>
>> >>>>>>>> The frequency domain more accessible to laypersons? I have my
>> >>>>>>>> doubts ;)
>> >>>>>>>
>> >>>>>>> Gamers, at least, are familiar with “frames per second” and how
>> >>>>>>> that corresponds to their monitor’s refresh rate.
>> >>>>>>
>> >>>>>> I am sure they can easily transform back into the time domain to
>> >>>>>> get the frame period ;) . I am partly kidding, I think your idea is
>> >>>>>> great in that it is a truly positive value which could lend itself
>> >>>>>> to being used in ISP/router manufacturer advertising, and hence
>> >>>>>> might work in the real world; on the other hand I like to keep data
>> >>>>>> as “raw” as possible (not that ^(-1) is a transformation worthy of
>> >>>>>> being called data massage).
>> >>>>>>
>> >>>>>>> The desirable range of latencies, when converted to Hz, happens to
>> >>>>>>> be roughly the same as the range of desirable frame rates.
>> >>>>>
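The conversion being discussed is just the reciprocal of the latency; a few
illustrative points (made-up values, only to show the mapping):

  for latency_ms in (10, 20, 50, 100, 500):
      print("%3d ms -> %3.0f Hz" % (latency_ms, 1000.0 / latency_ms))
  # 10 ms -> 100 Hz, 20 ms -> 50 Hz, 100 ms -> 10 Hz, 500 ms -> 2 Hz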
>> >>>>>> Just to play devil's advocate, the interesting part is time or
>> >>>>>> saving time so seconds or milliseconds are also intuitively
>> >>>>>> understandable and can be easily added ;)
>> >>>>>
>> >>>>> Such readouts are certainly interesting to people like us. I have no
>> >>>>> objection to them being reported alongside a frequency readout. But I
>> >>>>> think most people are not interested in “time savings” measured in
>> >>>>> milliseconds; they’re much more aware of the minute- and hour-level
>> >>>>> time savings associated with greater bandwidth.
>> >>>>>
>> >>>>> - Jonathan Morton
>> >>>>>