On Fri, Jun 7, 2019 at 8:10 PM Bob Briscoe <ietf@bobbriscoe.net> wrote:

I'm afraid there are not the same pressures to cause rapid roll-out at all, because it's flaky now, jam tomorrow. (Actually ECN-DualQ-SCE has a much greater problem - complete starvation of SCE flows - but we'll come on to that in Q4.)

I want to say at this point that I really appreciate all the effort you've been putting in, trying to find common ground.

In trying to find a compromise, you've taken the fire that is really aimed at the inadequacy of the underlying SCE protocol - for anything other than FQ. If the primary SCE proponents had attempted to articulate a way to use SCE in a single queue or a dual queue, as you have, they would have taken my fire instead.

But regardless, the queue-building from classic ECN-capable endpoints that only get 1 congestion signal per RTT is what I understand as the main downside of the tradeoff if we try to use ECN-capability as the dualq classifier. Does that match your understanding?

This is indeed a major concern of mine (not as major as the starvation of SCE explained under Q4, but we'll come to that).

Fine-grained (DCTCP-like) and coarse-grained (Cubic-like) congestion controls need to be isolated, but I don't see how, unless their packets are tagged for separate queues. Without a specific fine/coarse identifier, we're left with having to re-use other identifiers:
  • You've tried to use ECN vs Not-ECN. But that still lumps two large incompatible groups (fine ECN and coarse ECN) together - the sketch below makes this concrete.
  • The only alternative that would serve this purpose is the flow identifier at layer-4, because it isolates everything from everything else. FQ is where SCE started, and that seems to be as far as it can go.
Should we burn the last unicorn for a capability needed on "carrier-scale" boxes, but which requires FQ to work? Perhaps yes if there was no alternative. But there is: L4S.
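
To make the classifier point concrete, here is a minimal Python sketch (mine, not from any draft; the queue names and exact codepoint handling are illustrative assumptions) contrasting the two classification options:

    # ECN codepoints from the 2-bit IP ECN field (RFC 3168).
    NOT_ECT, ECT1, ECT0, CE = 0b00, 0b01, 0b10, 0b11

    def classify_by_ecn_capability(ecn):
        # SCE keeps ECT(1) as a network-applied mark, so the only
        # input-side classifier left is "ECN-capable vs not":
        # fine-grained (DCTCP-like) and coarse-grained (RFC 3168
        # Cubic-like) flows both arrive as ECT(0) and are lumped
        # into the same queue.
        return "ecn_queue" if ecn != NOT_ECT else "classic_queue"

    def classify_l4s(ecn):
        # L4S instead spends ECT(1) as a sender-set identifier, so
        # fine-grained flows can be separated from coarse RFC 3168
        # flows without per-flow state.
        return "low_latency_queue" if ecn in (ECT1, CE) else "classic_queue"

    for label, ecn in [("Cubic (RFC 3168 ECN)", ECT0),
                       ("DCTCP-like (fine ECN)", ECT1),
                       ("not ECN-capable", NOT_ECT)]:
        print(f"{label:22s} {classify_by_ecn_capability(ecn):14s} {classify_l4s(ecn)}")

Running it shows the ECN-vs-Not-ECN classifier putting Cubic and DCTCP-like flows into the same queue, while the ECT(1)-based one separates them; the only other identifier that separates them is the layer-4 flow ID, which is the FQ dependency above.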


I have trouble understanding why all traffic should end up classified as either Cubic-like or DCTCP-like.
We know this is not true today, so I fail to understand why it should become true in the future.
It is also difficult to predict how applications will evolve in terms of the traffic mix they generate.
I feel we are moving towards more customized transport services with less predictable patterns.

I do not see, for instance, much discussion of the presence of RTC traffic, or of how the dualQ system
behaves when the input traffic does not respond in the way the two types of sources assumed by dualQ do.

If my application uses simulcast or multi-stream techniques, I can have several video streams on the same link that,
as far as I understand, will experience significant latency in the classic queue - unless my app starts cheating
by marking its packets to get into the priority queue.

In both cases, i.e. whether my RTC app cheats or not, I do not understand how the parametrization of the dualQ
scheduler can cope with traffic that behaves differently from what was assumed when the parameters were tuned.
For instance, in one instantiation of dualQ based on WRR, the weights are set to 1:16. This necessarily has to
change when RTC traffic is present. How?
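
To show concretely what those weights control, here is a toy Python sketch of a WRR drain between the two queues. The 1:16 figure is the one quoted above; everything else (which queue gets the larger share, the packet granularity) is my assumption, not the drafts' mechanism:

    from collections import deque

    def wrr_drain(low_latency, classic, w_ll=16, w_classic=1):
        # Each round serves up to w_ll low-latency packets for every
        # w_classic classic packets, until both queues are empty.
        served = []
        while low_latency or classic:
            for _ in range(w_ll):
                if low_latency:
                    served.append(low_latency.popleft())
            for _ in range(w_classic):
                if classic:
                    served.append(classic.popleft())
        return served

    # An unresponsive RTC flow that lands in the classic queue shares
    # a 1-in-17 service ratio with all other classic traffic, whatever
    # its own rate: the weights are fixed at configuration time.
    ll = deque(f"L{i}" for i in range(32))
    cl = deque(f"C{i}" for i in range(32))
    print(wrr_drain(ll, cl)[:17])   # 16 low-latency packets, then one classic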

Is the assumption that a trusted marker is used, as in typical DiffServ deployments,
or that a policer identifies and punishes cheating applications?
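
For what it's worth, here is a naive per-flow token-bucket sketch of what the second option might look like; the rate, the demote-rather-than-drop policy, and all names are my hypothetical assumptions, not anything specified for dualQ:

    import time
    from collections import defaultdict

    class NaivePolicer:
        # Demote flows that mark themselves into the priority queue
        # but exceed a policy rate; overdrawn packets go to the
        # classic queue instead of being dropped.
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0          # bytes per second
            self.burst = float(burst_bytes)
            self.tokens = defaultdict(lambda: float(burst_bytes))
            self.last = defaultdict(time.monotonic)

        def admit_to_priority(self, flow_id, pkt_len):
            # Refill the flow's bucket, then charge the packet.
            now = time.monotonic()
            elapsed = now - self.last[flow_id]
            self.last[flow_id] = now
            self.tokens[flow_id] = min(self.burst,
                                       self.tokens[flow_id] + elapsed * self.rate)
            if self.tokens[flow_id] >= pkt_len:
                self.tokens[flow_id] -= pkt_len
                return True     # stays in the priority queue
            return False        # demoted to the classic queue

Note that even this naive version needs per-flow token state, which reintroduces some of the per-flow cost that avoiding FQ was meant to save.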

BTW I'd love to understand how dualQ is supposed to work under more general traffic assumptions.

Luca