Bob,

Commenting as an individual, not a WG chair.

> Q#1: If this glosses over any concerns you have, please explain.

It does gloss over, at least for me.  The TL;DR summary is that items 1-3 aren’t relevant or helpful, IMHO, leaving items 4 and 5, whose effectiveness depends on widespread deployment of RACK and of FQ AQMs (e.g., FQ-CoDel), respectively.

Items 1 & 2: The general expectation for Internet transport protocols is that they’re robust against “stupid network tricks” like reordering, but existing protocols transport wind up being designed/implemented for the network we have, not the one we wish we had.  I’m generally skeptical of “highly unlikely” arguments, as horrendous results in a highly unlikely scenario are not acceptable if that scenario occurs repeatedly, even with long intervals in between occurrences.  In light of that, I view items 1 and 2 as defining the problem scenario that needs to be addressed, particularly if L4S is to be widely deployed, and prefer to focus on items 3-5 about how the problem is dealt with.

Item 3: This begins by correctly pointing out that 3DupACK is the criterion for triggering conventional TCP fast retransmission (2DupACK doesn’t).  An aspect that isn’t mentioned is that AQMs for classic (non-L4S) traffic should be marking randomly (above a queue threshold, CE-marking probability depends on queue occupancy), not threshold marking (above a queue threshold, mark all packets with CE).  If threshold marking is used, 3 CE marks in a row is a near certainty, as for non-mice flows one can expect at least that many packets in an RTT window; this is a “Doctor, it hurts when I do <this>.”/”Don’t do that!” scenario where the right answer is to fix the broken threshold-marking implementation.
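
To make the distinction concrete, here is a minimal sketch of the two marking disciplines in Python (the function names and thresholds are illustrative, not taken from any RFC or implementation):

    import random

    def threshold_mark(queue_depth, thresh):
        # Threshold marking: every packet above the threshold is CE-marked,
        # so under sustained load, runs of consecutive CE marks are certain.
        return queue_depth > thresh

    def probabilistic_mark(queue_depth, min_th, max_th, max_p=0.1):
        # RED-style random marking: CE probability ramps up with queue
        # occupancy, so back-to-back CE marks remain improbable at
        # moderate marking rates.
        if queue_depth <= min_th:
            return False
        if queue_depth >= max_th:
            return True
        frac = (queue_depth - min_th) / (max_th - min_th)
        return random.random() < frac * max_p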

Assuming probabilistic marking, one then needs to look at 3-in-a-row CE-marking probabilities based on the marking rate.  These are not small - for example, at a 10% marking probability, the likelihood of CE-marking 3 packets in a row starting from a specific packet is 1 in 1,000 (1/10th of 1%), but across 500 packets in a flow, the probability of at least one such run is roughly 35-40%.  My initial take-away from this is that if the two bottlenecks (conventional followed by L4S) persist, then the “unusual scenario” of 3 CE-marked packets in a row is nearly certain to happen eventually, which suggests that item 3 is not particularly helpful, leaving items 4 (RACK) and 5 (FQ-CoDel).
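
As a quick sanity check on that figure, the exact run probability can be computed with a short recursion (assuming, as above, that each packet is independently CE-marked with probability p):

    def prob_run(p, n, r=3):
        # P(at least one run of r consecutive CE marks in n packets).
        # state[k] = P(no run of r yet, trailing run of marks has length k).
        q = 1.0 - p
        state = [1.0] + [0.0] * (r - 1)
        for _ in range(n):
            new = [0.0] * r
            new[0] = q * sum(state)        # an unmarked packet resets the run
            for k in range(r - 1):
                new[k + 1] = p * state[k]  # a marked packet extends the run
            state = new                    # mass of completed runs is dropped
        return 1.0 - sum(state)

    print(prob_run(0.10, 500))     # ~0.36: a 3-mark run in a 500-packet flow
    print(prob_run(0.10, 500, 2))  # ~0.99: a 2-mark run is all but certain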

So, while I don’t have a conclusion to draw, it appears to me that the countermeasures to this conventional TCP flow misbehavior with L4S are deployment of RACK at endpoints and deployment of FQ AQMs such as FQ-CoDel at non-L4S potential bottleneck nodes.  Items 4 and 5 below effectively assert wide deployment of those algorithms; additional information and data on that deployment would be of interest.

Thanks, --David

From: tsvwg <tsvwg-bounces@ietf.org> On Behalf Of Bob Briscoe
Sent: Thursday, June 13, 2019 12:48 PM
To: ecn-sane@lists.bufferbloat.net; tcpm IETF list
Cc: tsvwg IETF list
Subject: [tsvwg] ECN CE that was ECT(0) incorrectly classified as L4S

[I'm sending this to ecn-sane 'cos that's where I detect that this concern is still rumbling.
I'm also sending to tcpm@ietf 'cos there's a question for TCP experts just before the quoted text below.
And tsvwg@ietf is where it ought to be discussed.]

Now that the IPR issue with L4S has been put to bed, one by one I am going through the other concerns that have been raised about L4S.

In the IETF draft that records all the pros and cons of different identifiers to use for L4S, under the "ECT(1) and CE" choice (which is currently the one adopted at the IETF) there was already an explanation of why there would be vanishingly low risk of any harmful consequences from CE that was originally ECT(0) being classified into the L4S queue:
https://tools.ietf.org/html/draft-ietf-tsvwg-ecn-l4s-id-06#page-32

Re-reading that, I have found some things unstated that I had thought were obvious. So I've spelled it all out long-hand in the text below, which is now in my local copy of the draft and will be in the next revision unless people suggest improvements/corrections here.

Q#1: If this glosses over any concerns you have, please explain.
Otherwise I will continue to consider that this is effectively a non-issue, which is the conclusion everyone in the TCP community came to at the time the L4S identifier was chosen back in 2015.

Q#2: The last couple of lines are the only part I am not sure of. Do most of today's TCP implementations recover the reduction in congestion window when they discover later that a fast retransmit was spurious? There's a note at the end of the intro to RFC 4015 saying there was insufficient consensus to standardize this behaviour, but that most likely means it's done in different ways, rather than that it isn't done at all.


Bob


======================================

   Risk of reordering classic CE packets:  Classifying all CE packets
      into the L4S queue risks any CE packets that were originally
      ECT(0) being incorrectly classified as L4S.  If there were delay
      in the Classic queue, these incorrectly classified CE packets
      would arrive early, which is a form of reordering.  Reordering can
      cause TCP senders (and senders of similar transports) to
      retransmit spuriously.  However, the risk of spurious
      retransmissions would be extremely low for the following reasons:
 
      1.  It is quite unusual to experience queuing at more than one
          bottleneck on the same path (the available capacities have to
          be identical).
 
      2.  In only a subset of these unusual cases would the first
          bottleneck support classic ECN marking while the second
          supported L4S ECN marking, which would be the only scenario
          where some ECT(0) packets could be CE marked by a non-L4S AQM
          then the remainder experienced further delay through the
          Classic side of a subsequent L4S DualQ AQM.
 
      3.  Even then, when a few packets are delivered early, it takes
          very unusual conditions to cause a spurious retransmission, in
          contrast to when some packets are delivered late.  The first
          bottleneck has to apply CE-marks to at least N contiguous
          packets and the second bottleneck has to inject an
          uninterrupted sequence of at least N of these packets between
          two packets earlier in the stream (where N is the reordering
          window that the transport protocol allows before it considers
          a packet lost).
 
             For example, consider N=3 and the sequence of packets
             100, 101, 102, 103,... and imagine that packets 150, 151,
             152 from later in the flow are injected as follows:
             100, 150, 151, 101, 152, 102, 103...  If this were late
             reordering, even one packet arriving 50 places out of
             sequence would trigger a spurious retransmission, but
             there is no spurious retransmission here, because packet
             101 moves the cumulative ACK counter forward before 3
             packets have arrived out of order.  Later, when packets
             148, 149, 153... arrive, even though there is a 3-packet
             hole, there will be no problem, because the packets to
             fill the hole are already in the receive buffer.  (A toy
             replay of this example in code appears after this list.)
 
      4.  Even with the currently recommended TCP reordering window
          (N=3), spurious retransmissions will be unlikely for all the
          above reasons.  As RACK [I-D.ietf-tcpm-rack] becomes widely
          deployed, it tends to adapt the reordering window to a larger
          value of N, which makes the chance of a contiguous sequence
          of N early arrivals vanishingly small.
 
      5.  Even a run of 2 CE marks within a classic ECN flow is
          unlikely, given FQ-CoDel is the only known widely deployed AQM
          that supports classic ECN marking and it takes great care to
          separate out flows and to space any markings evenly along each
          flow.
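
      As a toy replay of the item-3 example above, here is a minimal
      duplicate-ACK counter of the kind a pre-RACK TCP sender uses
      (packet numbers stand in for whole segments; a sketch of the
      cumulative-ACK argument, not a TCP implementation):

          arrivals = [100, 150, 151, 101, 152, 102, 103]
          expected = 100      # receiver's next in-order packet number
          buffered = set()    # out-of-order packets held at the receiver
          dupacks = 0         # consecutive duplicate ACKs seen
          spurious = False
          for pkt in arrivals:
              if pkt == expected:
                  expected += 1
                  while expected in buffered:   # drain any filled hole
                      buffered.discard(expected)
                      expected += 1
                  dupacks = 0   # a new cumulative ACK resets the counter
              else:
                  buffered.add(pkt)             # early packet: buffer it
                  dupacks += 1                  # ...and count a dup ACK
                  if dupacks >= 3:              # 3DupACK -> fast retransmit
                      spurious = True
          print("spurious fast retransmit:", spurious)   # prints: False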
 
      It is extremely unlikely that the above set of 5 eventualities
      that are each unusual in themselves would all happen
      simultaneously.  But, even if they did, the consequences would
      hardly be dire: the odd spurious fast retransmission.  Admittedly
      TCP reduces its congestion window when it deems there has been a
      loss, but even this can be recovered once the sender detects that
      the retransmission was spurious.

-- 
________________________________________________________________
Bob Briscoe                               http://bobbriscoe.net/