<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Jake,<br>
<br>
<div class="moz-cite-prefix">On 19/06/2019 05:24, Holland, Jake
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:D13294C4-105C-4F58-A762-6911A21A18C6@akamai.com">
<pre class="moz-quote-pre" wrap="">Hi Bob and Luca,
Thank you both for this discussion, I think it helped crystallize a
comment I hadn't figured out how to make yet, but was bothering me.
I’m reading Luca’s question as asking about fixed-rate traffic that does
something like a cutoff or downshift if loss gets bad enough for long
enough, but is otherwise unresponsive.
The dualq draft does discuss unresponsive traffic in 3 of the sub-
sections in section 4, but there's a point that seems sort of swept
aside without comment in the analysis to me.
The referenced paper[1] from that section does examine the question
of sharing a link with unresponsive traffic in some detail, but the
analysis seems to bake in an assumption that there's a fixed amount
of unresponsive traffic, when in fact for a lot of the real-life
scenarios for unresponsive traffic (games, voice, and some of the
video conferencing) there's some app-level backpressure, in that
when the quality of experience goes low enough, the user (or a qoe
trigger in the app) will often change the traffic demand at a higher
layer than a congestion controller (by shutting off video, for
instance).
The reason I mention it is because it seems like unresponsive
traffic has an incentive to mark L4S and get low latency. It doesn't
hurt, since it's a fixed rate and not bandwidth-seeking, so it's
perfectly happy to massively underutilize the link. And until the
link gets overloaded it will no longer suffer delay when using the
low latency queue, whereas in the classic queue queuing delay provides
a noticeable degradation in the presence of competing traffic.</pre>
</blockquote>
It is very much intentional to allow unresponsive traffic in the L
queue if it is not contributing to queuing.<br>
<br>
You're right that the title of S.4.1.3 sounds like there's a
presumption that all unresponsive ECN traffic is bad. Sorry, that was
not the intention. Elsewhere the drafts do say that a reasonable
amount of smoothly paced unresponsive traffic is OK alongside any
responsive traffic.<br>
<br>
(I've just posted an -09 rev, but I'll post a -10 that fixes
that, hopefully before the Monday cut-off.)<br>
<br>
If you're talking about where unresponsive traffic is mentioned in
4.1.1, I think that's OK, 'cos that's in the context of saturated
congestion marking (when it's not OK to be unresponsive).<br>
<br>
<br>
<br>
<blockquote type="cite"
cite="mid:D13294C4-105C-4F58-A762-6911A21A18C6@akamai.com">
<pre class="moz-quote-pre" wrap="">
I didn't see anywhere in the paper that tried to check the quality
of experience for the UDP traffic as non-responsive traffic approached
saturation, except by inference that loss in the classic queue will
cause loss in the LL queue as well.</pre>
</blockquote>
Yeah, in the context of Henrik's thesis (your [1]), "unresponsive"
was used as a byword for "attack traffic". But that shouldn't be
taken to mean unresponsive is considered evil for L4S in general.<br>
<br>
Indeed, Low Latency DOCSIS started from the assumption of using a low
latency queue for unresponsive traffic (games, VoIP, etc.), then
added responsive L4S traffic into the same queue later. <br>
<br>
You may have seen the draft about assigning a DSCP for
Non-Queue-Building (NQB) traffic for that purpose (as with L4S and
unlike Diffserv, this codepoint solely describes the traffic's
behaviour, not what it wants or needs). <br>
<a class="moz-txt-link-freetext" href="https://tools.ietf.org/html/draft-white-tsvwg-nqb-02">https://tools.ietf.org/html/draft-white-tsvwg-nqb-02</a> <br>
And there are references in ecn-l4s-id to other identifiers that
could be used to get unresponsive traffic into the low latency queue
(DOCSIS classifies EF and NQB as low latency by default). <br>
<br>
We don't want ECN to be the only way to get into the L queue, 'cos we
don't want to encourage mismarking as 'ECN' when a flow is not
actually going to respond to ECN.<br>
<br>
<blockquote type="cite"
cite="mid:D13294C4-105C-4F58-A762-6911A21A18C6@akamai.com">
<pre class="moz-quote-pre" wrap="">
But letting unresponsive flows get away with pushing out more classic
traffic and removing the penalty that classic flows would give it seems
like a risk that would result in more use of this kind of unresponsive
traffic marking itself for the LL queue, since it just would get lower
latency almost up until overload.</pre>
</blockquote>
As explained to Luca, it's counter-intuitive, but responsive flows
(either C or L) use the same share of capacity irrespective of which
queue any unresponsive traffic is in. Think of it as the
unresponsive traffic subtracting capacity from the aggregate
(because both queues can use the whole aggregate), then the coupling
sharing out what's left. The coupling makes it like a FIFO from a
bandwidth perspective.<br>
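If it helps to see the arithmetic, here's a toy model of that point (my own simplification for illustration; the dualq draft has the real equations). The responsive flows' steady-state share depends only on how much capacity the unresponsive traffic consumes, not on which queue it's classified into:<br>

```python
def responsive_share(capacity, unresponsive_rate, n_responsive, unresp_queue="L"):
    """Toy model: steady-state per-flow rate for responsive flows, in Mb/s.

    unresp_queue is deliberately unused: in this model the coupling makes
    bandwidth shares queue-agnostic, like a FIFO from a bandwidth
    perspective. The unresponsive traffic simply subtracts from the
    aggregate, and the coupling shares out what's left.
    """
    return (capacity - unresponsive_rate) / n_responsive

# 120 Mb/s link, 20 Mb/s of unresponsive traffic, 4 responsive flows:
# each responsive flow gets 25 Mb/s whether the unresponsive traffic
# is classified into the L queue or the C queue.
print(responsive_share(120, 20, 4, unresp_queue="L"))  # 25.0
print(responsive_share(120, 20, 4, unresp_queue="C"))  # 25.0
```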
<br>
You can try this with the tool you mentioned that you had
downloaded. There's a slider to add unresponsive traffic to either
queue.<br>
<br>
So it's fine if unresponsive traffic doesn't cause any queuing
itself. It can happily use the L queue. This was a very important
design goal, but we write about it circumspectly in the IETF drafts,
'cos talk about allowing unresponsive traffic can trigger political
correctness arguments. (Oops, am I writing on an IETF list?)<br>
<br>
Nonetheless, when one or more unresponsive flows are consuming some
capacity, and responsive flows take the total over the available
capacity, then both are responsible in proportion to their
contribution to the queue, 'cos the unresponsive flows didn't respond
(they didn't even try to).<br>
<br>
This is why it's OK to have a small unresponsive flow, but it
becomes less and less OK to have a larger and larger unresponsive
flow. <br>
<br>
BTW, the proportion of blame for the queue is what the queuing score
represents in the DOCSIS queue protection algo. It's quite simple
but subtle. See your PS at the end. Right now I'm going to get on
with writing about that in a proper doc, rather than in an email.<br>
<br>
<br>
<blockquote type="cite"
cite="mid:D13294C4-105C-4F58-A762-6911A21A18C6@akamai.com">
<pre class="moz-quote-pre" wrap="">
Many of the apps that send unresponsive traffic would benefit from low
latency and isolation from the classic traffic, so it seems a mistake
to claim there's no benefit, and it furthermore seems like there's
systematic pressures that would often push unresponsive apps into this
domain.</pre>
</blockquote>
There's no bandwidth benefit. <br>
There's only a latency benefit, and even then the only benefits are:<br>
<ul>
<li>the low latency behaviour of yourself and other flows behaving
like you<br>
</li>
<li>and, critically, isolation from those flows that aren't
behaving as well as you.</li>
</ul>
Neither gives an incentive to mismark - you get nothing if you don't
behave. And there's a disincentive for 'Classic' TCP flows to
mismark, 'cos they badly underutilize without a queue.<br>
<br>
(See also reply to Luca addressing accidents and malice, which lie
outside control by incentives).<br>
<br>
<blockquote type="cite"
cite="mid:D13294C4-105C-4F58-A762-6911A21A18C6@akamai.com">
<pre class="moz-quote-pre" wrap="">
If that line of reasoning holds up, the "rather specific" phrase in
section 4.1.1 of the dualq draft might not turn out to be so specific
after all, and could be seen as downplaying the risks.</pre>
</blockquote>
Yup, as said, I'll fix the phrasing in 4.1.3. But I'm not going to
touch 4.1.1 without better understanding what the problem is there.<br>
<br>
<blockquote type="cite"
cite="mid:D13294C4-105C-4F58-A762-6911A21A18C6@akamai.com">
<pre class="moz-quote-pre" wrap="">
Best regards,
Jake
[1] <a class="moz-txt-link-freetext" href="https://riteproject.files.wordpress.com/2018/07/thesis-henrste.pdf">https://riteproject.files.wordpress.com/2018/07/thesis-henrste.pdf</a>
PS: This seems like a consequence of the lack of access control on
setting ECT(1), and maybe the queue protection function would address
it, so that's interesting to hear about.</pre>
</blockquote>
Yeah, I'm trying to write about that next. But if you extract
Appendix P from the DOCSIS 3.1 spec, it's already explained pretty
well there, and it's openly available.<br>
<br>
However, I want it to be clear that Q Prot is not /necessary/ for
L4S - and it's also got wider applicability, I think.<br>
<br>
<blockquote type="cite"
cite="mid:D13294C4-105C-4F58-A762-6911A21A18C6@akamai.com">
<pre class="moz-quote-pre" wrap="">
But I thought the whole point of dualq over fq was that fq state couldn't
scale properly in aggregating devices with enough expected flows sharing
a queue? If this protection feature turns out to be necessary, would that
advantage be gone? (Also: why would one want to turn this protection off
if it's available?)</pre>
</blockquote>
1/ The q-prot mechanism certainly has the disadvantage that it has
to access L4 headers. But it is much more lightweight than FQ.<br>
<br>
There's no queue state per flow. The flow-state is just a number
that represents its own expiry time - a higher queuing score pushes
out the expiry time further. If it has expired when the next packet
of the flow arrives, it just starts from now, like a new flow,
otherwise it adds to the existing expiry time. Long-running L4S
flows don't hold on to flow-state between most packets - it usually
expires reasonably early in the gap between the packets of a normal
flow, then it can be recycled for packets from any other flows that
arrive in between. So only misbehaving flows hold flow state
persistently. <br>
<br>
The subtle part is the queuing score. It uses the internal variable
from the AQM that drives the ECN marking probability - call it p
(between 0 and 1 in floating point). It takes the size of each
arriving packet of a flow and scales it by the value of p on arrival.
That would accumulate a number rising at the so-called
congestion-rate of the flow, i.e. the rate at which the flow is
causing congestion (the rate at which it is sending bytes that are
ECN-marked or dropped).<br>
<br>
However, rather than just doing that, the queuing score is also
normalized into time units (to represent the expiry time of the flow
state, as above). That's possible by just dividing by a constant
that represents the acceptable congestion-rate per flow (rounded up
to an integer power of 2 for efficiency). A nice property of the
linear scaling of L4S is that this number is a constant for any link
rate. <br>
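For concreteness, here's a rough Python sketch of the per-flow state update as I've described it in the last few paragraphs. The names and constants are my shorthand for this email, not the normative DOCSIS Appendix P pseudocode:<br>

```python
class FlowState:
    """Per-flow state is a single number: an absolute expiry time.

    Illustrative sketch only (names and arithmetic are mine, not the
    normative DOCSIS Appendix P pseudocode).
    """

    def __init__(self):
        self.expiry = 0.0  # state is recyclable once this time has passed

    def on_packet(self, pkt_bytes, p, now, ok_congestion_rate):
        """pkt_bytes: size of the arriving packet.
        p: the AQM's internal marking probability on arrival (0..1).
        ok_congestion_rate: acceptable congestion-bytes/sec per flow
        (a constant; rounded up to a power of 2 in the real algo).
        """
        # Queuing score for this packet, normalized into time units:
        score = pkt_bytes * p / ok_congestion_rate
        if now > self.expiry:
            # State had expired: start from now, like a new flow.
            self.expiry = now + score
        else:
            # State still live: push the expiry time out further.
            self.expiry += score
        return self.expiry
```

A well-behaved flow (low p, normal inter-packet gaps) keeps expiring, so its state can be recycled between packets; a flow driving p up accumulates an expiry time ever further into the future, and it's that persistently live state that identifies it as contributing to the queue.<br>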
<br>
That's probably not understandable. Let me write it up properly -
with some explanatory pictures and examples.<br>
<br>
<br>
Bob<br>
<br>
<blockquote type="cite"
cite="mid:D13294C4-105C-4F58-A762-6911A21A18C6@akamai.com">
<pre class="moz-quote-pre" wrap="">
_______________________________________________
Ecn-sane mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Ecn-sane@lists.bufferbloat.net">Ecn-sane@lists.bufferbloat.net</a>
<a class="moz-txt-link-freetext" href="https://lists.bufferbloat.net/listinfo/ecn-sane">https://lists.bufferbloat.net/listinfo/ecn-sane</a>
</pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
________________________________________________________________
Bob Briscoe <a class="moz-txt-link-freetext" href="http://bobbriscoe.net/">http://bobbriscoe.net/</a></pre>
</body>
</html>