[Bloat] [Ecn-sane] sce materials from ietf

Sebastian Moeller moeller0 at gmx.de
Mon Dec 2 02:18:48 EST 2019


Hi Dave,

On December 2, 2019 6:10:51 AM GMT+01:00, Dave Taht <dave at taht.net> wrote:
>Sebastian Moeller <moeller0 at gmx.de> writes:
>
>> Hi Rodney,
>>
>>
>>> On Dec 1, 2019, at 18:30, Rodney W. Grimes <4bone at gndrsh.dnsmgr.net> wrote:
>>> 
>>>> Hi Jonathan,
>>>> 
>>>> 
>>>>> On Nov 30, 2019, at 23:23, Jonathan Morton <chromatix99 at gmail.com> wrote:
>>>>> 
>>>>>> On 1 Dec, 2019, at 12:17 am, Carsten Bormann <cabo at tzi.org> wrote:
>>>>>> 
>>>>>>> There are unfortunate problems with introducing new TCP options,
>>>>>>> in that some overzealous firewalls block traffic which uses
>>>>>>> them.  This would be a deployment hazard for SCE, which merely
>>>>>>> using a spare header flag avoids.  So instead we are still
>>>>>>> planning to use the spare bit - which happens to be one that
>>>>>>> AccECN also uses, but AccECN negotiates in such a way that SCE
>>>>>>> can safely use it even with an AccECN capable partner.
>>>>>> 
>>>>>> This got me curious:  Do you have any evidence that firewalls are
>>>>>> friendlier to new flags than to new options?
>>>>> 
>>>>> Mirja Kuhlewind said as much during the TCPM session we attended,
>>>>> and she ought to know.  There appear to have been several studies
>>>>> performed on this subject; reserved TCP flags tend to get ignored
>>>>> pretty well, but unknown TCP options tend to get either stripped
>>>>> or blocked.
>>>>> 
>>>>> This influenced the design of AccECN as well; in an early version
>>>>> it would have used only a TCP option and left the TCP flags alone.
>>>>> When it was found that firewalls would often interfere with this,
>>>>> the three-bit field in the TCP flags area was cooked up.
>>>> 
>>>> 
>>>> 	Belt and suspenders, eh? But realistically, the idea of using
>>>> an accumulating SCE counter to allow for a lossy reverse ACK path
>>>> seems sort of okay (after all TCP relies on the same, so there
>>>> would be a nice symmetry ).
>>>> I really wonder whether SCE could not, in addition to its current
>>>> bit, borrow the URG pointer field in cases when it is not used, or
>>>> not fully used (if the MSS is smaller than 64K there might be a few
>>>> bits leftover, with an MTU < 2000 I would expect that ~5 bits might
>>>> still be usable in that rare case). I might be completely off to
>>>> lunch here, but boy a nice rarely used contiguous 16bit field in
>>>> the TCP header, what kind of mischief one could arrange with that
>>>> ;) Looking at the AccECN draft, I see that my idea is not terribly
>>>> original... But, hey for SCE having an additional higher fidelity
>>>> SCE counter might be a nice addition, assuming URG(0), urgent
>>>> pointer > 0 will not be bleached/rejected by uninitiated TCP
>>>> stacks/middleboxes...
>>> 
>>> We need to fix the ACK issues rather than continue to work around
>>> them.  ACK thinning is good, as long as it does not cause information
>>> loss.  There is no draft/RFC on this; one needs to be written that
>>> explains that you cannot just ignore all the bits, you have to
>>> preserve the reserved bits, so you can only thin ACKs if they are the
>>> same.  Jonathan already fixed Cake (I think that is the one that has
>>> ACK thinning) to not collapse ACKs that have different bit 7 values.
>>
>> 	Well, I detest ACK thinning and believe that the network
>> should not try to second-guess the user's traffic (dropping/marking on
>> reaching capacity is acceptable, but the kind of silent ACK thinning
>> some DOCSIS ISPs perform seems actively user-hostile). But thinning or
>> no thinning, the accumulative signaling is how the ACK stream deals
>> with (reasonably) lossy paths, and I think any additional signaling
>> via pure ACK packets should simply be tolerant to unexpected losses. I
>> fully agree that if ACK thinning is performed it really should be
>> careful to not lose information when doing its job, but SCE hopefully
>> can deal with whatever is out in the field today (I am looking at you,
>> DOCSIS uplinks...), no?
>
>I happen to not be huge on ack thinning either, but the effect
>on highly asymmetric networks was pretty convincing, and having
>to handle fewer ACKs at the sender is a potential goodness also.
>
>http://blog.cerowrt.org/post/ack_filtering/
>
>At the time we did I thought it could be made even better,
>if we allowed more droppable packets to accumulate on each
>round, it would both be "fairer" and be able to "drop more"
>over each round.
>
>https://github.com/dtaht/sch_cake/issues/104
>
>Never got around to it.
>
>I'd much rather have fewer highly asymmetric networks,

         +1, though I will not hold my breath on getting this anytime soon... GPON is asymmetric by default (2:1), full-duplex DOCSIS requires costly plant changes and was moved from the 3.1 spec to 4.0, and the DSL variants have no symmetric band plans that I know of (well, G.fast has, but that only helps if the G.fast uplink is not itself running over GPON). Then again, GPON's 2:1 ratio would already be most of the way to symmetry...
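
         To put a number on why the ratio matters for ACK traffic in the first place, here is a crude back-of-envelope sketch (my assumptions, not anyone's measurement: delayed ACKs, i.e. one ~40 byte pure ACK per two 1500 byte segments, and no link-layer overhead, so real numbers will be somewhat higher):

    def ack_load_mbit(down_mbit, mss=1500, ack_bytes=40, segs_per_ack=2):
        # upstream rate consumed by pure ACKs while the downstream is saturated
        return down_mbit * ack_bytes / (mss * segs_per_ack)

    # e.g. a 100/10 Mbit/s (10:1) link: ack_load_mbit(100) ~= 1.33 Mbit/s,
    # i.e. ~13% of the 10 Mbit/s uplink just for ACKs

         At 2:1 that works out to only a few percent of the uplink; at 10:1 or worse it starts to eat into the uplink noticeably.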

>and the endpoint tcps do the thinning (which is what more or less
>happens with GSO), but....

         Yepp, the endpoints basically should be in control of the ACK rate, but they should also be considerate.


>
>secondly, I note that "ack prioritization" is a very common thing in
>multiple shapers I've looked at, starting with wondershaper and in many
>others (including dd-wrt). A lot of these are *wrong*; wondershaper,
>for example, only recognized 64-byte ACKs. I think more than a few
>modems do ack prioritization rather than "thinning".

         I believe indiscriminate ACK boosting is the wrong thing in a tiered prioritization scheme, as ACKs should have the same priority as the rest of their flow. But for the fidelity of the feedback loop, less delay for ACKs seems benign, no?
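
         As an aside on the "only recognized 64 byte acks" point: a pure ACK is not a fixed size, timestamps and SACK blocks make it longer. A minimal, untested Python sketch of what a correct pure-ACK test actually has to inspect (purely illustrative, not how wondershaper or cake implement it):

    import struct

    def is_pure_ack(pkt: bytes) -> bool:
        # expects a raw IPv4 packet; returns True for a payload-less TCP ACK
        if len(pkt) < 40 or pkt[9] != 6:        # too short, or not TCP
            return False
        ihl = (pkt[0] & 0x0F) * 4               # IPv4 header length in bytes
        total_len = struct.unpack("!H", pkt[2:4])[0]
        tcp = pkt[ihl:]
        if len(tcp) < 20:
            return False
        data_offset = (tcp[12] >> 4) * 4        # TCP header length, options included
        flags = tcp[13]
        payload = total_len - ihl - data_offset
        # ACK set, no SYN/FIN/RST, zero payload -- regardless of packet size
        return bool(flags & 0x10) and not (flags & 0x07) and payload == 0

         (cake's ack-filter of course checks much more than this before collapsing anything, including, per Rodney's point above, not merging ACKs whose reserved bits differ.)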


>
>thirdly, protocols such as QUIC are already sending fewer
>acknowledgements per packet than most TCPs do, which is a good thing.

         Again, +1, the endpoints should know best.


>
>fourthly, I've been meaning to try thinning on wifi for a while. Wifi
>has a problem in that only a fixed number of packets can fit
>in a txop and everything in a txop is usually sent reliably. 
>
>Here's 5 days' worth of data from one of my sites. It's not hugely
>loaded in the uplink direction, but roughly 11% of all packets are
>dropped. 

        Almost all of those were ACKs though; I can see why you consider it unwise to hoist these over the wifi link only to have them filtered at your edge router...
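
        (Quick sanity check on the stats below, if I am adding the counters up right: cake handled 96,513,781 + 12,173,093 = 108,686,874 packets over those 5 days, so the 12,173,093 drops are ~11.2%, matching your "roughly 11%"; and 12,170,286 of those drops are ack_drops, i.e. well over 99% of everything dropped was a filtered ACK.)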

Best Regards
        Sebastian


>
>
>qdisc cake 8007: dev eth0 root refcnt 9 bandwidth 9Mbit diffserv3
>triple-isolate nat nowash ack-filter split-gso rtt 100.0ms noatm
>overhead 18 mpu 64 
>Sent 13088217784 bytes 96513781 pkt (dropped 12173093, overlimits
>155529797 requeues 558) 
> backlog 0b 0p requeues 558
> memory used: 1144944b of 4Mb
> capacity estimate: 9Mbit
> min/max network layer size:           28 /    1500
> min/max overhead-adjusted size:       64 /    1518
> average network hdr offset:           14
>
>                   Bulk  Best Effort        Voice
>  thresh      562496bit        9Mbit     2250Kbit
>  target         32.3ms        5.0ms        8.1ms
>  interval      127.3ms      100.0ms      103.1ms
>  pk_delay        4.7ms        2.0ms        709us
>  av_delay        1.3ms        162us         69us
>  sp_delay         50us          3us          3us
>  backlog            0b           0b           0b
>  pkts           150501    108280345       256028
>  bytes       146280265  13846693704     40682021
>  way_inds          181      7552458        26288
>  way_miss         6579      1383844        20861
>  way_cols            0            0            0
>  drops             125         2682            0
>  marks             171          277            0
>  ack_drop            0     12170286            0
>  sp_flows            2            5            0
>  bk_flows            0            2            0
>  un_flows            0            0            0
>  max_len          4542        28766         2988
>  quantum           300          300          300
>
>>
>>> 
>>> Note that I consider the time of the arriving ACKs to also be
>>> information; RACK for instance uses that, so in the case of RACK
>>> any thinning could be considered bad.
>>
>> 	I am with you here; if the end-points decided to exchange
>> packets, the network should do its best to deliver these. That is
>> orthogonal to the question whether an ACK-every-two-MSS rate is
>> ideal for all/most applications.
>>
>>> BUT I'll settle for not tossing reserved bit changes away as a
>>> "good enough" step forward that should be simple to implement (2
>>> gate delay xor/or function).
>>
>> 	Fair enough, the question is more what behavior happens out in
>> the field, and could any other bit be toggled ACK by ACK to reduce
>> the likelihood of an ACK filter triggering?
>>
>> Best Regards
>> 	Sebastian
>>
>>
>>> 
>>>> 	Sebastian
>>>>> - Jonathan Morton
>>> -- 
>>> Rod Grimes                                                rgrimes at freebsd.org
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


