From: Jonathan Morton <chromatix99@gmail.com>
To: Ruediger.Geib@telekom.de
Cc: tcpm@ietf.org, ecn-sane@lists.bufferbloat.net, tsvwg@ietf.org
Subject: Re: [Ecn-sane] [tsvwg] ECN CE that was ECT(0) incorrectly classified as L4S
Date: Mon, 5 Aug 2019 13:59:47 +0300
Message-ID: <0CD0C15C-2461-4B0E-AFD1-947BA6E6212B@gmail.com>
In-Reply-To: <LEXPR01MB046306842E5AB407A7BFC6619CDA0@LEXPR01MB0463.DEUPRD01.PROD.OUTLOOK.DE>
> [JM] A progressive narrowing of effective link capacity is very common in consumer Internet access. Theoretically you can set up a chain of almost unlimited length of consecutively narrowing bottlenecks, such that a line-rate burst injected at the wide end will experience queuing at every intermediate node. In practice you can expect typically three or more potentially narrowing points:
>
> [RG] deleted. Please read https://tools.ietf.org/html/rfc5127#page-3 , first two sentences. That's a sound starting point, and I don't think much has changed since 2005.
As I said, that reference is *usually* true for *responsible* ISPs. Not all ISPs, however, are responsible vis-à-vis their subscribers, as opposed to their shareholders. There have been some high-profile incidents of *deliberately* inadequate peering arrangements in the USA (often involving Netflix vs major cable networks, for example), and consumer ISPs in the UK *typically* have diurnal cycles of general congestion due to under-investment in the high-speed segments of their networks.
To say nothing of what goes on in Asia Minor and Africa, where demand routinely far outstrips supply. In those areas, solutions to make the best use of limited capacity would doubtless be welcomed.
> [RG] About the bursts to expect, it's probably worth noting that today's most popular application generating traffic bursts is watching video clips streamed over the Internet. Viewers dislike it when movies stall. My impression is that all major CDNs are aware of that and try their best to avoid this situation. In particular, I don't expect streaming bursts to overwhelm access link shaper buffers by design. And that, I think, limits the burst sizes of the majority of traffic.
In my personal experience with YouTube, to pick a major video streaming service not-at-all at random, the bursts last several seconds and are essentially ack-clocked. It's just a high/low watermark system in the receiving client's buffer; when it's full, it tells the server to stop sending, and after it drains a bit it tells the server to start again. When traffic is flowing, it's no different from any other bulk flow (aside from the use of BBR instead of CUBIC or Reno) and can be managed in the same way.
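For concreteness, that control loop amounts to something like the toy sketch below; the watermark values and names are my own illustrative assumptions, not anything the client actually publishes:

# Toy sketch of a high/low watermark receive buffer, as described above.
# Threshold values are illustrative assumptions only.

LOW_WATERMARK_S  = 10.0   # resume fetching below this many seconds of buffered media
HIGH_WATERMARK_S = 30.0   # pause fetching above this many seconds

class PlaybackBuffer:
    def __init__(self):
        self.buffered_seconds = 0.0
        self.fetching = True              # whether the client is currently asking the server to send

    def on_data_received(self, seconds_of_media):
        self.buffered_seconds += seconds_of_media
        if self.fetching and self.buffered_seconds >= HIGH_WATERMARK_S:
            self.fetching = False         # buffer full: tell the server to stop

    def on_playback_tick(self, dt_seconds):
        self.buffered_seconds = max(0.0, self.buffered_seconds - dt_seconds)
        if not self.fetching and self.buffered_seconds <= LOW_WATERMARK_S:
            self.fetching = True          # buffer drained a bit: tell the server to resume

While fetching is on, the transfer is just an ordinary ack-clocked bulk flow, which is the point above.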
The timescale I'm talking about, on the other hand, is sub-RTT. Packet intervals may be counted in microseconds at origin, then gradually spaced out into the millisecond range as they traverse the successive bottlenecks en route. As I mentioned, there are several circumstances when today's servers emit line-rate bursts of traffic; these can also result from aggregation in certain link types (particularly wifi), and hardware offload engines which try to treat multiple physical packets from the same flow as one. This then results in transient queuing delays as the next bottleneck spaces them out again.
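To put rough numbers on that (the rates and sizes below are illustrative assumptions, not measurements):

# Back-of-envelope: transient queue induced by one line-rate burst at a
# narrower hop.  All figures here are assumed for illustration.

def burst_queue_delay_ms(burst_bytes, ingress_bps, egress_bps):
    # The tail of the burst arrives after burst_bytes*8/ingress_bps seconds
    # but drains at egress_bps, so its queuing delay is roughly the
    # difference between the two serialisation times.
    return 1e3 * burst_bytes * 8 * (1.0 / egress_bps - 1.0 / ingress_bps)

# e.g. a 64 KB offload-sized burst leaving a 10 Gb/s server port and hitting
# a 50 Mb/s access link:
print(burst_queue_delay_ms(64 * 1024, 10e9, 50e6))   # roughly 10 ms of transient queue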
When several such bursts coincide at a single bottleneck, moreover, the queuing required to accommodate them may be as much as their sum. This "incast effect" is particularly relevant in datacentres, which routinely produce synchronised bursts of traffic as responses to distributed queries, but can also occur in ordinary web traffic when multiple servers are involved in a single page load. IW10 does not mean you only need 10 packets of buffer space, and many CDNs are in fact using even larger IWs as well.
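The same back-of-envelope sum applies when bursts coincide (again, all figures are illustrative assumptions):

# Coincident initial windows from several servers answering one page load,
# all landing on the same access link at about the same time.

def incast_queue_ms(n_servers, iw_packets, mss_bytes, bottleneck_bps):
    burst_bytes = n_servers * iw_packets * mss_bytes
    return 1e3 * burst_bytes * 8 / bottleneck_bps

# e.g. 10 servers x IW10 x 1500-byte packets = 150 KB arriving nearly at once
# at a 50 Mb/s link:
print(incast_queue_ms(10, 10, 1500, 50e6))   # about 24 ms of queue to absorb, or drops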
These effects really do exist; we have measured them in the real world, reproduced them in lab conditions, and designed qdiscs to accommodate them as cleanly as possible. The question is to what extent they are relevant to the design of a particular technology or deployment; some will be much more sensitive than others. The only way to be sure of the answer is to be aware, and do the appropriate maths.
> [RG] Other people use their equipment to communicate and play games
These are examples of traffic that would be sensitive to the delay from transient queuing caused by other traffic. The most robust answer here is to implement FQ at each such queue. Other solutions may also exist.
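To sketch why FQ is so robust against this, here is a toy round-robin scheduler showing the principle only, not any particular qdisc:

# Toy flow-isolation sketch: each flow gets its own queue and the flows are
# served in rotation, so one flow's burst queues behind itself while a
# sparse game or voice flow is still served promptly every round.

from collections import OrderedDict, deque

class ToyFQ:
    def __init__(self):
        self.flows = OrderedDict()        # flow_id -> deque of packets

    def enqueue(self, flow_id, packet):
        self.flows.setdefault(flow_id, deque()).append(packet)

    def dequeue(self):
        if not self.flows:
            return None
        flow_id, q = next(iter(self.flows.items()))   # flow at the head of the rotation
        packet = q.popleft()
        if q:
            self.flows.move_to_end(flow_id)           # send it to the back of the rotation
        else:
            del self.flows[flow_id]
        return packet

Real qdiscs such as fq_codel and Cake add byte-based fairness and AQM on top, but the isolation property that protects latency-sensitive flows is the same.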
> Any solution for Best Effort service which is TCP-friendly and supports communication expecting no congestion at the same time should be easy to deploy and come with obvious benefits.
Well, obviously. Although not everyone remembers this at design time.
> [RG] I found Sebastian's response sound. I think there are people interested in avoiding congestion at their access.
> If the access link is the bottleneck, that's what's to be expected.
It is typically *a* bottleneck, but there can be more than one from the viewpoint of a line-rate burst.
> [RG] I'd like to repeat again what's important to me: no corner case engineering. Is there something to be added to Sebastian's scenario?
He makes an essentially similar point to mine, from a different perspective. Hopefully the additional context provided above is enlightening.
- Jonathan Morton