General list for discussing Bufferbloat
From: Dave Taht <dave.taht@gmail.com>
To: Jonathan Morton <chromatix99@gmail.com>
Cc: "Jonas Mårtensson" <martensson.jonas@gmail.com>,
	bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] powerboost and sqm
Date: Fri, 29 Jun 2018 10:00:13 -0400
Message-ID: <CAA93jw7cmzD30ZVNtTo407Jbx_whneoTS0fkY=JaB5dqLfxNtw@mail.gmail.com>
In-Reply-To: <D125DDA5-CF61-4FD2-8497-3B42E91B2EE6@gmail.com>

That is a remarkably good explanation and should end up on a website
or blog somewhere. A piece targeted
at gamers, with the Hz values....
On Fri, Jun 29, 2018 at 8:22 AM Jonathan Morton <chromatix99@gmail.com> wrote:
>
> > On 29 Jun, 2018, at 1:56 pm, Jonas Mårtensson <martensson.jonas@gmail.com> wrote:
> >
> > So, what do you think:
> >
> > - Are the latency spikes real? The fact that they disappear with sqm suggests so but what could cause such short spikes? Is it related to the powerboost?
>
> I think I can explain what's going on here.  TL;DR - the ISP is still using a dumb FIFO, but has resized it to something reasonable.  The latency spikes result from an interaction between several systems.
>
> "PowerBoost" is just a trade name for the increasingly-common practice of configuring a credit-mode shaper (typically a token bucket filter, or TBF for short) with a very large bucket.  It refills that bucket at a fixed rate (100Mbps in your case), up to a specified maximum, and drains it proportionately for every byte sent over the link.  Packets are sent when there are enough tokens in the bucket to transmit them in full, and not before.  There may be a second TBF with a much smaller bucket, to enforce a limit on the burst rate (say 300Mbps) - but let's assume that isn't used here, so the true limit is the 1Gbps link rate.
>
> Until the bucket empties, packets are sent over the link as soon as they arrive, so the observed latency is minimal and the throughput converges on a high value.  The moment the bucket empties, however, packets are queued and throughput instantaneously drops to 100Mbps.  The queue fills up quickly and overflows; packets are then dropped.  TCP rightly interprets this as its cue to back off.
>
> The maximum inter-flow induced latency is consistently about 125ms.  This is roughly what you'd expect from a dumb FIFO that's been sized to 1x BDP, and *much* better than typical ISP configurations to date.  I'd still much rather have the sub-millisecond induced latencies that Cake achieves, but this is a win for the average Joe Oblivious.
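>
> (Back of the envelope: 125ms of standing queue draining at 100Mbps is about 100e6 / 8 * 0.125 ≈ 1.5 MB of buffer - roughly one BDP at 100Mbps for a design RTT on the order of 125ms, which is my inference rather than something the graphs show directly.)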
>
> So why the spikes?  Well, TCP backs off when it sees the packet losses, and it continues to do so until the queue drains enough to stop losing packets.  However, this leaves TCP transmitting at less than the shaped rate, so the TBF starts filling its bucket with the leftovers.  Latency returns to minimum because the queue is empty.  TCP then gradually grows its congestion window again to probe the path; different TCPs do this in different patterns, but it'll generally take much more than one RTT at this bandwidth.  Windows, I believe, increases cwnd by one packet per RTT (i.e. it is Reno-like).
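>
> (For a sense of scale, assuming a 20ms path RTT - my number, not one from your test: 100Mbps needs a window of roughly 100e6 * 0.020 / 8 / 1500 ≈ 170 packets, and growing back from half of that at one packet per RTT takes on the order of 85 RTTs, i.e. a second or two.  That is plenty of time for the bucket to refill.)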
>
> So by the time TCP gets back up to 100Mbps, the TBF has stored quite a few spare tokens in its bucket.  These now start to be spent, while TCP continues probing *past* 100Mbps, still not seeing the true limit.  By the time the bucket empties, TCP is transmitting considerably faster than 100Mbps, such that the *average* throughput since the last loss is *exactly* 100Mbps - that average is precisely what the TBF is designed to enforce.  (Technically it starts with an empty bucket rather than a full one, but the bucket usually fills up before the user gets around to measuring it.)
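>
> Here's a toy model of that cycle, one loop iteration per RTT.  Every constant (RTT, bucket depth, FIFO size) is an assumption chosen for illustration, not a measurement, and the 1Gbps physical link rate is ignored for simplicity:
>
> RATE_BPS = 100e6               # shaper fill rate
> RTT      = 0.02                # seconds, assumed
> MSS      = 1500                # bytes
> BUCKET   = 5_000_000           # spare-token capacity in bytes, assumed
> FIFO     = 1_500_000           # dumb FIFO depth, ~125ms at 100Mbps
>
> per_rtt = RATE_BPS / 8 * RTT   # bytes of credit earned each RTT
> tokens, queue, cwnd = 0.0, 0.0, 10 * MSS
>
> for step in range(1, 1001):
>     budget = per_rtt + tokens             # bytes the shaper will pass this RTT
>     sent   = min(cwnd + queue, budget)
>     tokens = min(BUCKET, budget - sent)   # unused credit is banked
>     queue  = queue + cwnd - sent          # backlog waiting behind the shaper
>     if queue > FIFO:                      # FIFO overflows: drops, TCP backs off
>         queue = FIFO
>         cwnd  = max(MSS, cwnd / 2)
>     else:
>         cwnd += MSS                       # Reno-like growth: one packet per RTT
>     if step % 50 == 0:
>         print(f"t={step * RTT:5.1f}s  rate={sent / RTT * 8 / 1e6:7.1f}Mbps"
>               f"  queue_delay={queue / (RATE_BPS / 8) * 1000:5.1f}ms")
>
> Running it shows the pattern in your graphs: long quiet stretches at minimal delay while credit accumulates, then a burst well past 100Mbps, then a sharp spike in queueing delay and a loss-driven back-off.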
>
> So then the TBF runs out of spare tokens and slams on the brakes, the queue rapidly fills and overflows, packets are lost, TCP reels, retransmits, and backs off again.  Rinse, repeat.
>
> > - Would you enable sqm on this connection? By doing so I miss out on the higher rate for the first few seconds.
>
> Yes, I would absolutely use SQM here.  It'll both iron out those latency spikes and reduce packet loss, and what's more it'll prevent congestion-related latency and loss from affecting any but the provoking flow(s).
>
> IMHO, the benefits of PowerBoost are illusory.  When you've got 100Mbps steady-state, tripling that for a short period is simply not perceptible in most applications.  Even web browsing, which typically involves transfers smaller than the size of the bucket, is limited by latency, not bandwidth, once you get above a couple of Mbps.  For a real performance benefit - for example, speeding up large software updates - bandwidth increases need to be available for minutes, not seconds, so that a gigabyte or more can be transferred at the higher speed.
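>
> (To put numbers on that, with a page size assumed purely for illustration: a 2MB page is 16Mbit, i.e. about 160ms of wire time at 100Mbps versus about 53ms at 300Mbps.  The ~100ms saved is smaller than the several RTTs of DNS, TLS and request round-trips a typical page costs anyway, so the boost is barely visible.)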
>
> > What are the actual downsides of not enabling sqm in this case?
>
>
> Those latency spikes would be seen by latency-sensitive applications as jitter, which is one of the most insidious problems to cope with in a realtime interactive system.  They coincide with momentary spikes in packet loss (which, unfortunately, dslreports' graphs don't show), and those are also difficult to cope with.
>
> That means that whenever a bulk transfer is started up by some background application (Steam, Windows Update) or by someone else in your house (a visiting niece bulk-uploads an SD card full of holiday photos to Instagram, your wife cues up Netflix for the evening), your VoIP or videoconference session will glitch and drop out periodically - unless it has deliberately increased its own latency with internal buffering and redundant transmissions to compensate.
>
> That also means your online game session, under similar circumstances, will occasionally fail to show you an enemy move in time for you to react to it, or even delay or fail to register your own actions because the packets notifying the server of them were queued and/or lost.  Sure, 125ms is a far cry from the multiple seconds we often see, but it's still problematic for gamers - it corresponds to just 8Hz, when they're running their monitors at 144Hz and their mice at 1000Hz.
>
>  - Jonathan Morton
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



-- 

Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619

Thread overview: 12+ messages
2018-06-29 10:56 Jonas Mårtensson
2018-06-29 11:23 ` Sebastian Moeller
2018-06-30  6:26   ` Jonas Mårtensson
2018-06-30  7:20     ` Jonathan Morton
2018-06-30  7:46     ` Pete Heist
2018-06-30 11:22       ` Dave Taht
2018-07-01 21:49       ` Jonas Mårtensson
2018-07-04 20:25         ` Benjamin Cronce
2018-06-29 12:22 ` Jonathan Morton
2018-06-29 14:00   ` Dave Taht [this message]
2018-06-29 14:42     ` Sebastian Moeller
2018-06-29 15:45   ` Jonas Mårtensson
