[Ecn-sane] [tsvwg] Comments on L4S drafts
Bob Briscoe
ietf at bobbriscoe.net
Wed Jun 19 08:59:07 EDT 2019
Jake & Ingemar,
On 16/06/2019 11:07, Ingemar Johansson S wrote:
> Hi Jake + all
>
> Please see inline
>
> /Ingemar
>
>
>> -----Original Message-----
>> From: Holland, Jake <jholland at akamai.com>
>> Sent: den 14 juni 2019 20:28
>> To: Ingemar Johansson S <ingemar.s.johansson at ericsson.com>; Bob Briscoe
>> <ietf at bobbriscoe.net>
>> Cc: tsvwg at ietf.org
>> Subject: Re: [tsvwg] Comments on L4S drafts
>>
>> Hi Ingemar,
>> (bcc: ecn-sane, to keep them apprised on the discussion).
>>
>> Thanks for chiming in on this. A few comments inline:
>>
>> On 2019-06-08, 12:46, "Ingemar Johansson S"
>> <ingemar.s.johansson at ericsson.com> wrote:
>>> Up until now it has been quite a challenge to make ECN happen. I
>>> believe that part of the reason has been that ECN is not judged to
>>> give a large enough gain.
>> Could you elaborate on this point?
>>
>> I haven't been sure how to think about the claims in the l4s drafts that operators
>> will deploy it rapidly because of performance.
>>
>> Based on past analyses (e.g. the classic ECN rollout case study in RFC
>> 8170 [1]), I thought network operators had a very "safety first" outlook on these
>> things, and that rapid deployment for performance benefits seemed like wishful
>> thinking.
[BB] The ECN rollout case study in RFC 8170 is not a useful example. It
ends hoping there will be some client roll-out (written before Apple's
decision) and doesn't even mention that network roll-out would still be
needed subsequently. So it gives no insight into what causes network
operator resistance.
>> But I'd be interested to know more about why that view might be mistaken.
> [IJ] I believe that it is easy to end up in a lot of speculation. I don't believe that the safety-first thinking makes much sense; yes, it has sometimes been used as a counter-argument. Part of the problem is perhaps that ECN was introduced into 3GPP for VoLTE, and then when ECN is proposed for its original use in 3GPP (= a generic, transport-protocol-agnostic feature) it gets hard to make it stick. With that said, ECN is supported in both the LTE and NR standards (TS36.300, TS38.300). It is however rarely deployed. One could speculate about the reasons; I believe that one big reason can be that traditional ECN does not show a large enough delta improvement to make it worthwhile. I can of course be wrong, I don't possess a crystal ball 😊
[BB] My experience on this comes from years inside BT. The last 15 were
after ECN was standardized, and for the last few years I was in BT's
tech strategy team, regularly making business cases for various
improvements. And talking with folks from other operators, of course.
When I quantified the performance benefit of classic ECN, it was
embarrassing. You only got significant benefits in an under-provisioned
network, which most operators avoid for obvious other reasons. Classic
ECN gave next-to-no benefit with long-running flows. The more
significant benefit for short transactional flows was primarily due to
avoiding the timeout when the last packet of a flow was dropped. I
figured that could be solved e2e, and indeed in 2012 the tail loss probe
was proposed to solve that problem. The remaining benefit was mostly due
to not losing SYNs and to a lesser extent SYN/ACKs, but classic ECN
couldn't be used on SYNs anyway. In comparison the potential risks on
the cost side dominated.
Finally, for a large network improvement project it is nearly impossible
to squeeze the cash needed out of the relatively small budgets assigned
for regular network improvements. None of the access equipment supported
a modern AQM or ECN, so we would have had to tender for new designs. To
persuade vendors to spend that sort of money, you need a budget line in
a project that is buying kit for a new service with a projected revenue
stream (e.g. a new sports service, a VR product, etc). That means your
performance improvement has to be necessary for that product.
An alternative would have been to show that the performance improvement
would gain sales from competing ISPs for long enough to pay for the
costs of the improvement, but that's much harder to argue convincingly.
>>> Besides this, L4S has the nice
>>> property that it has potential to allow for faster rate increase when
>>> link capacity increases.
>> I think section 3.4 of RFC 8257 says the rate increase would be the
>> same:
>> https://tools.ietf.org/html/rfc8257#section-3.4
>> A DCTCP sender grows its congestion window in the same way as
>> conventional TCP.
>>
>> I guess this is referring to the paced chirping for rapid growth idea presented last
>> time?
>> https://datatracker.ietf.org/meeting/104/materials/slides-104-iccrg-
>> implementing-the-prague-requirements-in-tcp-for-l4s-01#page=20
>>
>> I'm a little unclear on how safe this can be made, but I agree it seems useful if it
>> can work well.
> [IJ] Yes, DCTCP uses traditional additive increase. I have personally done a few experiments in this area, nothing that is good enough to show as the experiments were very limited. One possible idea could be to make the bandwidth probing in BBR(v2) more aggressive. And there may also be possibilities with paced chirping too.
[BB] Note that paced chirping is not the differentiator here. It doesn't
depend on ECN, nor L4S-ECN, nor SCE-ECN for that matter. It is
delay-based, and potentially applicable to any e2e technology.
The differentiator that L4S provides (and perhaps SCE, if all the
problems were fixed) is the introduction of scalable congestion control
(like DCTCP), which induces frequent signalling, with a number of
signals per RTT that remains invariant as flow rate scales.
Support for a transition to scalable CC is as important as cutting
latency. Aside from being able to scale flow rate indefinitely, it also
solves the problem of rapidly detecting when more capacity has become
available. If you normally get 2 signals per RTT (like DCTCP), you can
tell there's available capacity after 2 or 3 RTTs. If you get 1 signal
every few hundred RTTs (like Cubic), you cannot tell there's available
capacity for a thousand or so RTTs. That in itself is useful... you
don't need to do the seeking with paced chirping, which is just one
attempt to get up to capacity faster and with less overshoot.
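To make the timing argument concrete, here is a rough back-of-envelope
sketch (my own illustration, not from any draft): a sender has to
observe roughly a few of its normal inter-signal gaps in silence before
"no signal" plausibly means "spare capacity". The function name and the
factor of 3 are assumptions for illustration.

def rtts_to_detect_headroom(signals_per_rtt: float, confidence_factor: float = 3.0) -> float:
    """RTTs of silence needed before it is plausible the bottleneck has headroom.

    signals_per_rtt: typical congestion-signal rate at steady state.
    confidence_factor: how many normal inter-signal gaps to wait (assumed ~3).
    """
    normal_gap_rtts = 1.0 / signals_per_rtt      # average RTTs between signals
    return confidence_factor * normal_gap_rtts

# Scalable CC (DCTCP-like): ~2 ECN marks per RTT, invariant with flow rate.
print(rtts_to_detect_headroom(2.0))        # ~1.5 RTTs -> "after 2 or 3 RTT"

# Classic CC (e.g. Cubic at high BDP): ~1 signal every few hundred RTTs.
print(rtts_to_detect_headroom(1 / 300.0))  # ~900 RTTs -> "a thousand or so"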
>> Do you think the L4S benefits will still be sufficient if this point about faster
>> growth doesn't hold up (and/or could be replicated regardless of L4S), or is it
>> critical to providing sufficient benefit in 3GPP?
> [IJ] No, I don't believe that it is critical, it is definitely a welcome bonus if it is possible.
>
>> (Note: I'm not taking a position on this point, just asking about how much this
>> point matters to the 3GPP support, as you see it.)
[BB] Ingemar has described the New Radio meetings to me where he has
tried to propose ECN in the RLC layer. As with the swathes of other
proposals, he was given 2 minutes to persuade an audience of primarily
radio people, who have already seen all the work that went into ECN for
VoLTE come to nothing.
5G has promised extremely low latency. It is currently planning to do
that with 'old school' QoS - by limiting throughput into reserved
capacity. But that doesn't scale to apps that want high bandwidth and
low latency. That's when the NR working group will start listening more
carefully to ECN-based solutions.
>>> I see many applications that can benefit greatly from L4S, besides
>>> AR/VR, there is also an increased interest in the deployment of remote
>>> control capabilities for vehicles such as cars, trucks and drones, all
>>> of which require low latency video streaming.
>> Remote control over the internet instead of a direct radio link is an interesting
>> use case. Do you happen to know the research about delay parameters that
>> make the difference between viable or not viable for RC?
>> This touches on one of the reasons I've been skeptical that the benefits will drive
>> a rapid deployment--in most of the use cases I've come up with, it seems like
>> reducing delay from ~200-500ms down to ~15-30ms (as seems achievable even
>> for single queue with classic AQM) would give almost all the same benefits as
>> reducing from ~15-30ms down to 1ms.
> [IJ] The thing I like with L4S is that it reduces standing queues down to almost zero, which gives a very fast reaction time when throughput drops. In addition L4S gives frequent signals of congestion, which makes it easier for a congestion control algorithm to know when it is close to the congestion knee.
[BB] I did some research on motion-to-photon latency a while ago, with
others. It was for VR/AR, but it translates to similar apps. Quoting:
MTP Latency: AR/VR developers generally agree that MTP latency
becomes imperceptible below about 20 ms [Carmack13 <https://tools.ietf.org/html/draft-han-iccrg-arvr-transport-problem-01#ref-Carmack13>]. However,
some research has concluded that MTP latency must be less than
17ms for sensitive users [MTP-Latency-NASA <https://tools.ietf.org/html/draft-han-iccrg-arvr-transport-problem-01#ref-MTP-Latency-NASA>]. Experience has shown
that standards bodies tend to set demanding quality levels, while
motivated humans often happily adapt to lower quality although
they struggle with more demanding tasks. Therefore, we must be
clear that this 20 ms requirement is designed to enable immersive
interaction for the same wide range of tasks that people are used
to undertaking locally.
...
For a summary of numerous references
concerning the limit of human perception of delay see the thesis of
Raaen [Raaen16 <https://tools.ietf.org/html/draft-han-iccrg-arvr-transport-problem-01#ref-Raaen16>].
Let's say 20ms is too pedantic and you've got 50ms round trip MTP budget
(John Carmack says that 50ms feels responsive, but the slight lag is
still subtly unnatural, and merely defers the onset of VR-sickness).
We projected [latency budget
<https://tools.ietf.org/html/draft-han-iccrg-arvr-transport-problem-01#appendix-A.1.2>]
that, with some expected advances, it should be possible to get the
total of all delays except propagation and queuing down to about 13ms.
If one subtracts the delays you just stated for queuing, you get the
following left for propagation:
                 target for     non-      queuing   left for 2-way   reach in fibre
                 'responsive'   network             propagation
  2nd gen. AQM   50ms           - 13ms    - 30ms    =  7ms            700km  (440 miles)
  3rd gen. AQM   50ms           - 13ms    -  1ms    = 36ms           3600km (2250 miles)
5 times greater reach means responsive interaction between Los Angeles
and Atlanta, rather than just Los Angeles and Phoenix.
For communicating with a data centre, 5 times greater reach means
equivalent coverage from 25 times fewer sites (coverage area is the
square of reach). Concentration of sites is surely a very important cost
factor for a CDN.
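For completeness, the arithmetic behind the table can be sketched as
follows (my own illustration), assuming light propagates at roughly
200 km/ms in fibre (refractive index ~1.5) and using the 13ms
non-network projection cited above.

SPEED_IN_FIBRE_KM_PER_MS = 200.0   # roughly c / 1.5

def fibre_reach_km(mtp_budget_ms: float, non_network_ms: float, queuing_ms: float) -> float:
    """One-way fibre reach that fits within a round-trip motion-to-photon budget."""
    two_way_propagation_ms = mtp_budget_ms - non_network_ms - queuing_ms
    return (two_way_propagation_ms / 2.0) * SPEED_IN_FIBRE_KM_PER_MS

print(fibre_reach_km(50, 13, 30))  # 2nd gen. AQM, ~30ms queuing ->  700 km
print(fibre_reach_km(50, 13, 1))   # 3rd gen. AQM,  ~1ms queuing -> 3600 km

# Coverage area scales with the square of reach, so ~5x the reach needs
# roughly 25x fewer data-centre sites for equivalent coverage.
print((fibre_reach_km(50, 13, 1) / fibre_reach_km(50, 13, 30)) ** 2)  # ~26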
Note that, for real-time comms you need to watch the 99 or 99.9
percentile, not just median. See these percentiles on a log-scale at
slide 24 here:
https://www.files.netdevconf.org/f/4ebdcdd6f94547ad8b77/?dl=1
This was under rather extreme load (600 web sessions per second - see
slide for details).
Whatever, @Jake, I think you will agree that SCE's aim is to cut queuing
to similarly low levels. So arguing that we don't need such low delay
also argues against SCE.
>> Of course, there's a difference in that last 14-29ms, but for instance for gaming
>> reaction time it's well under the thresholds that make a difference for humans
>> (the low end of which is at 45ms, according to [2]), so it seems like the value in
>> that market would be captured by classic ECN, and therefore since classic ECN
>> deployment hasn't caught on yet, I had to conclude that the performance gains
>> to enable that market aren't sufficient to drive wide adoption.
>>
>> So I'm curious to know more about the use cases that get over that hump from
>> an operator's point of view, and what you've seen that leads you to believe the
>> additional gains of L4S from will make the difference on those use cases where
>> classic ECN wasn't adequate.
> [IJ] I guess for this part, there need to be more input from operators
[BB] When Kjetil [2] says 45ms is good enough for today's games, I'd
trust that. But you can't burn all that with queuing - if I had aimed
for 45ms not 50 ms above, I'd have been left with 2ms for propagation.
When I showed Kjetil the demo of L4S using finger-gestures to pan and
zoom cloud-rendered video, he agreed that humans are much more sensitive
to the lag between their real hand controlling a movement and their eye
seeing the thing move under their hand. It depends how much freedom we
want to give game developers to explore new user interfaces and delivery
platforms (e.g. a Wii interacting with cloud-rendering).
>
>>> My bottomline is that I believe L4S provides with a clear benefit that
>>> is large enough to be more widely accepted in 3GPP. SCE is as I see it
>>> more like something that is just a minor enhancement to ECN and is therefore
>> much
>>> harder to sell in to 3GPP.
>> Thanks, this is good to know.
>>
>> To me one benefit of SCE over L4S is that it seems safer to avoid relying on an
>> ambiguous signal (namely a CE that we don't know which kind of AQM set it) in a
>> control system, while still providing high-fidelity info about the network device
>> congestion, where available.
>>
>> I agree that it's not completely clear exactly how the congestion controllers can
>> capitalize on that info, but to me it still seems worth considering.
>>
>> So although I'll support L4S if it really covers all the safety issues and performs
>> better, I'd be more comfortable with the signaling if there's a way to make SCE
>> do the same job, especially if the endpoint implementation is simpler to get
>> robustly deployed.
[BB] How much is this a case of, "There aren't any problems with the SCE
endpoint because we haven't thought about the problems yet"?
As well as the straightforward engineering showstoppers that I have
highlighted (which I'll repeat 1-by-1 in later emails), there's also
algorithmic stuff that hasn't even been identified yet in SCE, let alone
addressed theoretically, let alone implemented.
For instance, a shift to fine-grained signals also shifts the smoothing
from the network to the sender. That means the sender has to smooth the
SCE signal and not the CE. So you have to deal with the cases where the
two controllers interact and one overtakes the other. I don't believe
stability is understood in such a system (you can be pessimistic when
slowing down, but you also have to ensure stability when speeding up).
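To illustrate what sender-side smoothing involves, here is a minimal
sketch that borrows DCTCP's per-RTT EWMA of the marking fraction (cf.
RFC 8257); the class and variable names are mine, not from any SCE or
L4S draft. The open question raised above is that this smoothed loop
would have to coexist with a separate, unsmoothed response to classic
CE, and the stability of that combination is not yet understood.

class MarkFractionSmoother:
    """Sender-side EWMA of the fraction of bytes carrying a fine-grained mark."""

    def __init__(self, gain: float = 1.0 / 16):
        self.alpha = 0.0          # smoothed marking fraction
        self.gain = gain          # EWMA gain g (DCTCP commonly uses 1/16)

    def on_rtt(self, acked_bytes: int, marked_bytes: int) -> float:
        """Update once per RTT; returns the smoothed marking fraction."""
        frac = marked_bytes / acked_bytes if acked_bytes else 0.0
        self.alpha = (1 - self.gain) * self.alpha + self.gain * frac
        return self.alpha

# Illustrative use: scale back cwnd in proportion to the smoothed fraction,
# DCTCP-style, while a separate (unsmoothed) loop would still react to CE.
smoother = MarkFractionSmoother()
cwnd = 100_000                                   # bytes, illustrative
alpha = smoother.on_rtt(acked_bytes=100_000, marked_bytes=12_000)
cwnd = int(cwnd * (1 - alpha / 2))               # multiplicative decrease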
It's not as if SCE can just ride on the back of the CC research we and
others have already done - it also introduces its own new research
problems.
>>
>> So really, I'm hoping for a bakeoff to decide this, because one of my concerns is
>> that L4S still doesn't have an implementation that does all the things the drafts
>> say are needed for safety on the internet, even though the initial proof of
>> concept demoing the performance gains was presented 7 years ago.
[BB] It was Jul 2015 (nearly 4 years ago, not 7).
>> It's good
>> that it's getting closer, but the long implementation cycle (which still doesn't
>> have all the features required by the drafts) is a concern for me from the
>> "running code" point of view.
[BB] The SCE endpoint will need all the features required by the drafts
as well, unless it is going to rely solely on FQ.
I should also add that we (the L4S proponents) never envisaged that we
would have to do all the endpoint stuff. We were all from
network-focused companies. Altho we all had background in congestion
control for video, we weren't allowed/expected to do such work on
company time.
What we didn't realize was that researchers aren't getting funding to do
such work these days (those that haven't been collected by Google). So
we eventually had to grasp the nettle and find ways to do the endpoint
stuff ourselves.
For instance, on a personal note, CableLabs is only funding a fraction
of my working week, and will only pay for time on tasks in my contract,
which are nearly all about the network aspects. I am self-funding nearly
all work I do on end-system stuff.
>>
>> On this point of view, it's possible that a parallel track might get further faster,
>> especially if it doesn't need the same special cases to be safe, which is part of
>> why I've been tentatively supportive.
[BB] Let me first see if I can get the SCE proponents to address the
show-stoppers that I have highlighted. By remaining silent, they seem to
have convinced everyone that these show-stoppers don't exist.
If necessary, it sounds like it would help to address the only
outstanding concern with L4S (classic ECN fall-back), irrespective of
whether we think the problem actually exists or will ever exist.
>>
>> And although I can see how the queue classification is a major issue that could
>> make the difference, especially with the very promising dualq proposal, it also
>> seems true that in addition to CPEs, there are promising avenues for carrier-
>> scale FQ systems (e.g [3], [4]) that could solve that. It makes me think that even
>> if SCE only gets low-latency with FQ and otherwise causes no harm, it's not clear
>> it'll be a slower path to ubiquitous deployment (and by the way, this approach
>> also would handle the opt-in access control problem).
[BB] You (@Jake) are right to point out that different people have
different ideas of what they think might happen in the future. However,
I think it is a bit of a stretch to imagine that ubiquitous deployment
of FQ might happen...
FQ assumes L4 headers are accessible, which assumes the Internet is an
unencrypted L3 network. In 4G and 5G the eNodeB or gNodeB where ECN
would need to be marked is a L2 node. A node deeper into the network has
already compressed, tunnelled and encapsulated the IP headers. So how
would FQ here access L4 port numbers? It can't do the cake trick of
creating an artificial bottleneck where the IP header is accessible,
because this concerns radio capacity, which varies hugely and continually.
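As a toy illustration of the classification problem (my own sketch, not
from any draft): an FQ scheduler hashes whatever 5-tuple fields it can
see into a queue; at a node that only sees an outer tunnel
encapsulation, every flow inside the tunnel presents identical visible
headers, so per-flow isolation is lost.

import hashlib

def fq_bucket(headers: dict, n_queues: int = 1024) -> int:
    """Hash whichever of the classic 5-tuple fields are visible into a queue index."""
    key = "|".join(str(headers.get(f, "")) for f in
                   ("src_ip", "dst_ip", "proto", "src_port", "dst_port"))
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % n_queues

# At an IP hop with cleartext headers, two flows almost certainly land in
# distinct queues:
print(fq_bucket({"src_ip": "192.0.2.1", "dst_ip": "198.51.100.9",
                 "proto": 6, "src_port": 5001, "dst_port": 443}))
print(fq_bucket({"src_ip": "192.0.2.1", "dst_ip": "198.51.100.9",
                 "proto": 6, "src_port": 5002, "dst_port": 443}))

# At a node that only sees the outer encapsulation (e.g. a GTP-U tunnel
# towards the radio node), both flows present the same visible headers,
# so they share one queue:
outer = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "proto": 17,
         "src_port": 2152, "dst_port": 2152}
print(fq_bucket(outer), fq_bucket(outer))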
Not to mention... all my other unanswered points about where SCE doesn't
work at all, e.g.
* all tunnels will have to propagate the ECT(1) codepoint, when the
spec saying this isn't even out of WGLC yet,
* and the optional TCP option for AccECN will be needed to feed back
ECT(1), when no major OS is going to implement the TCP option,
because they don't want to handle all the pain of middlebox mangling,
* and... <my other 3 points that I'll get to in later emails>
If the last unicorn goes to a solution that will rarely work, and
becomes renowned as ineffective and unconvincing, we will have wasted
the last unicorn.
>> Of course, this will presumably collapse to one answer at some point, but I'll
>> argue that it's worthwhile to give a good look to the alternate proposal...
>>
>> Anyway, thanks for the comments, I think it's good to see more discussion on
>> this.
[BB] Having alternative(s) is v important, even if strawmen. Proper
discussion is good too - I've been close enough to this that I can
identify problems v quickly, but the wider community needs discussion
time to get steeped in it all.
Thank you v much for all the time you're putting into this.
Cheers
Bob
>>
>> Best regards,
>> Jake
>>
>> [1] Appendix A.1, RFC 8170: https://tools.ietf.org/html/rfc8170#appendix-A.1
>> [2] https://protect2.fireeye.com/url?k=997ee527-c5f43093-997ea5bc-866a015dd3d5-1d25c70963170b1e&q=1&u=https%3A%2F%2Fojs.bibsys.no%2Findex.php%2FNIK%2Farticle%2Fview%2F9 (says 45ms)
>> [3] http://ppv.elte.hu/
>> [4] https://ieeexplore.ieee.org/document/8419697
>>
--
________________________________________________________________
Bob Briscoe                               http://bobbriscoe.net/