revolutions per minute - a new metric for measuring responsiveness
From: Matt Mathis <mattmathis@google.com>
To: Stuart Cheshire <cheshire@apple.com>
Cc: Rpm <rpm@lists.bufferbloat.net>
Subject: Re: [Rpm] Outch! I found a problem with responsiveness
Date: Tue, 5 Oct 2021 15:01:20 -0700	[thread overview]
Message-ID: <CAH56bmDOg8n0Nv1iAVoWmMyHmuPT2Sp=EXL9JqXO78ryAfeWwg@mail.gmail.com> (raw)
In-Reply-To: <49EE1D58-AF0F-4748-83D3-784B0D5F35EF@apple.com>


What you say is correct for effectively infinite bulk transfers. I was
talking about transactional data such as web pages. These days most
video (including many VC systems) is delivered as paced transactions.

Thanks,
--MM--
The best way to predict the future is to create it.  - Alan Kay

We must not tolerate intolerance;
       however our response must be carefully measured:
            too strong would be hypocritical and risks spiraling out of control;
            too weak risks being mistaken for tacit approval.


On Tue, Oct 5, 2021 at 10:26 AM Stuart Cheshire <cheshire@apple.com> wrote:

> On 4 Oct 2021, at 16:23, Matt Mathis via Rpm <rpm@lists.bufferbloat.net>
> wrote:
>
> > It has a super Heisenberg problem, to the point where it is unlikely to
> have much predictive value under conditions that are different from the
> measurement itself.    The problem comes from the unbound specification for
> "under load" and the impact of the varying drop/mark rate changing the
> number of rounds needed to complete a transaction, such as a page load.
> >
> > For modern TCP on an otherwise unloaded link with any minimally correct
> queue management (including drop tail), the page load time is insensitive
> to the details of the queue management.    There will be a little bit of
> link idle in the first few RTT (early slowstart), and then under a huge
> range of conditions for both the web page and the AQM, TCP will maintain at
> least a short queue at the bottleneck
>
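A quick back-of-envelope sketch of that startup idle, in Python, with
illustrative numbers that are not from this thread (initial window, RTT,
and link rate are all assumptions):

    import math

    IW = 10 * 1500            # initial window: 10 segments of 1500 bytes
    rtt = 0.030               # 30 ms round-trip time
    link_rate = 100e6 / 8     # 100 Mbit/s bottleneck, in bytes/s
    bdp = link_rate * rtt     # bytes in flight needed to fill the pipe

    # cwnd roughly doubles every RTT in slow start; the link is only
    # partly busy until cwnd reaches the BDP.
    rounds_to_fill = math.ceil(math.log2(bdp / IW))
    idle_bound = rounds_to_fill * rtt   # crude upper bound on the startup idle

    print(rounds_to_fill, idle_bound)   # ~5 rounds, under ~0.15 s here
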
> Surely you mean: TCP will maintain an EVER GROWING queue at the
> bottleneck? (Of course, the congestion control algorithm in use affects the
> precise nature of queue growth here. For simplicity here I’m assuming Reno
> or CUBIC.)
>
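To make that concrete, a toy Reno congestion-avoidance loop (again with
assumed numbers; real stacks, pacing, and real AQMs all differ):

    mss = 1500                # bytes per segment
    rtt_base = 0.030          # 30 ms path RTT with an empty queue
    link_rate = 100e6 / 8     # 100 Mbit/s bottleneck, in bytes/s
    bdp = link_rate * rtt_base

    cwnd = bdp                # start with the pipe exactly full
    for _ in range(100):      # 100 RTTs with no loss and no ECN mark
        cwnd += mss           # Reno congestion avoidance: +1 MSS per RTT
    standing_queue = cwnd - bdp             # excess parked in the bottleneck buffer
    extra_delay = standing_queue / link_rate

    print(standing_queue, extra_delay)      # ~150 kB queued, ~12 ms of added delay
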
> > TCP will also avoid sending any duplicate data, so the total data sent
> will be determined by the total number of bytes in the page, and the total
> elapsed time, by the page size and link rate (plus the idle from startup).
>
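That accounting reduces to a one-line estimate (page size and link rate
are assumed here; the startup idle reuses the slow-start sketch above):

    page_bytes = 2e6          # a 2 MB page
    link_rate = 100e6 / 8     # 100 Mbit/s bottleneck, in bytes/s
    startup_idle = 0.15       # slow-start idle estimated earlier

    completion_time = startup_idle + page_bytes / link_rate
    print(completion_time)    # ~0.31 s, regardless of how the queue is managed
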
> You are focusing on time-to-completion for a flow. For clicking “send” on
> an email, this is a useful metric. For watching a two-hour movie, served as
> a single large HTTP GET for the entire media file, and playing it as it
> arrives, time-to-completion is not very interesting. What matters is
> consistent smooth delivery of the bytes within that flow, so the video can
> be played as it arrives. And if I get bored of that video and click
> another, the amount of (now unwanted) stale packets sitting in the
> bottleneck queue is what limits how quickly I get to see the new video
> start playing.
>
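That switching cost is easy to put a number on (both figures below are
assumptions, not measurements):

    stale_bytes = 1.5e6       # old video already sitting in the bottleneck buffer
    link_rate = 25e6 / 8      # 25 Mbit/s access link, in bytes/s

    drain_time = stale_bytes / link_rate
    print(drain_time)         # ~0.48 s before the new video's first packets arrive
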
> > If AQM is used to increase the responsiveness, the losses or ECN marks
> will cause the browser to take additional RTTs to load the page.  If there
> is no cross traffic, these two effects (more rounds at higher RPM) will
> exactly counterbalance each other.
>
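A toy version of that counterbalance (all four numbers are invented and
chosen only so the products match; the last line just converts each
per-round delay into round trips per minute):

    # Big FIFO: few recovery rounds, but every round trip drags a long standing queue.
    rounds_fifo, rtt_fifo = 8, 0.200      # seconds per round trip under load
    # AQM: marks/losses cost extra rounds, but the per-round delay stays small.
    rounds_aqm, rtt_aqm = 32, 0.050

    print(rounds_fifo * rtt_fifo)         # 1.6 s to finish the page
    print(rounds_aqm * rtt_aqm)           # 1.6 s as well: same completion time
    print(60 / rtt_fifo, 60 / rtt_aqm)    # 300 vs 1200 round trips per minute
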
> Right: Improving responsiveness has *no* downside on time-to-completion
> for a flow. Throughput -- in bytes per second -- is unchanged. What
> improving responsiveness does is improve what happens throughout the
> lifetime of the transfer, without affecting the end time either for better
> or for worse.
>
> > This is perhaps why there are BB deniers: for many simple tasks it has
> zero impact.
>
> Of course. In the development of any technology we solve the most obvious
> problems first, and the less obvious ones later.
>
> If there was a bug that occasionally resulted in a corrupted file system
> and loss of data, would people argue that we shouldn’t fix it on the
> grounds that sometimes it *doesn’t* corrupt the file system?
>
> If your car's brakes didn't work, would people argue that it doesn't matter,
> because -- statistically speaking -- the brake pedal is depressed for only
> a tiny percentage of the overall time you spend driving?
>
> Stuart Cheshire
>
>

Thread overview: 10+ messages
2021-10-04 23:23 Matt Mathis
2021-10-04 23:36 ` [Rpm] RPM open meeting tuesdays 9:30-10:30 Dave Taht
2021-10-05 15:47   ` Matt Mathis
2021-10-11 20:52   ` Christoph Paasch
2021-10-05 16:18 ` [Rpm] Outch! I found a problem with responsiveness Christoph Paasch
2021-10-05 21:43   ` Simon Leinen
2021-10-11 21:01     ` Christoph Paasch
2021-10-12  7:11       ` Sebastian Moeller
2021-10-05 17:26 ` Stuart Cheshire
2021-10-05 22:01   ` Matt Mathis [this message]
