[Bloat] Random idea in reaction to all the discussion of TCP flavours - timestamps?

Fred Baker fred at cisco.com
Tue Mar 15 22:41:04 PDT 2011


From my perspective, the way to address this is to go back to first principles.

Van/Sally's congestion control algorithms, both the loss-sensitive ones and ECN, depend heavily on Jain's work in the 1980s, his patent dated 1994, and the Frame Relay and CLNS work on FECN and "Congestion Experienced". Jain worked from a basic concept, that of a "knee" and a "cliff":

In concept:

        |
     g  |     "capacity" or "bottleneck bandwidth"
     o  |- - - - - - - - - - - - - - - - - - - - -
     o  |            .'        \
     d  |          .' knee      \ cliff
     p  |        .'              \
     u  |      .'                 \
     t  |    .'                    \
        |  .'                       \
        |.'                          \
        +-----------------------------------------
                window -->

More likely reality:
        |
        |     "capacity" or "bottleneck bandwidth"
     g  |- - - - - - - - - - - - - - - - - - - - -
     o  |              __..,--=
     o  |           _,'        `.
     d  |        _,' |<-->|   |<-->|,
     p  |      ,'     knee     cliff \
     u  |    ,'                       \
     t  |  ,'                          `.._
        | /                                `'-----'
        +`-----------------------------------------
                window  -->

In short, the "knee" corresponds with the least window that maximizes goodput, and the cliff corresponds with the largest window that maximizes goodput *plus one* - that is, the least window that results in a reduction of goodput. In real practice, it's more approximate than the theory might suggest, but the concept measurably works.
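To make the knee/cliff definitions concrete, here is a toy fluid model of my own (not from Jain's work; the capacity and buffer numbers are made up for illustration): goodput rises linearly with window until the bottleneck saturates (the knee), stays flat while the buffer absorbs the excess, then collapses once the buffer overflows and losses waste capacity on retransmits (the cliff).

```python
CAPACITY = 10.0  # bottleneck bandwidth, packets per RTT (illustrative)
BUFFER = 5.0     # bottleneck buffer, packets (illustrative)

def goodput(window):
    if window <= CAPACITY:           # below the knee: unused bandwidth remains
        return window
    if window <= CAPACITY + BUFFER:  # between knee and cliff: queue grows,
        return CAPACITY              # goodput stays pinned at capacity
    # past the cliff: overflow losses force retransmits, goodput falls
    excess = window - (CAPACITY + BUFFER)
    return max(0.0, CAPACITY - excess)

# knee = least window that maximizes goodput; cliff = largest such window
knee = min(w for w in range(1, 31) if goodput(w) == CAPACITY)
cliff = max(w for w in range(1, 31) if goodput(w) == CAPACITY)
print(knee, cliff)  # → 10 15
```

In this toy model the flat region between 10 and 15 packets is exactly the knee-to-cliff span in the second figure: window grows, goodput doesn't.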

The question is how you measure whether you have reached it. Van's measure of mean queue depth, from my perspective, is a good approximation but misses the point. You can think about the knee in another way: if it is the least window that maximizes goodput, it is the point at which increasing your window is unlikely to increase your goodput. From the network's perspective, that is the point at which the queue always has something in it. There are lots of interesting ways to measure and state that, but that's what it comes down to. There is no more unused bandwidth at the bottleneck for another packet to take advantage of. At that point it's a zero-sum game: if one session manages to increase its share of the available capacity, some other session or set of sessions has to slow down.
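"The queue always has something in it" can be observed directly: remember the last wall-clock time the queue was seen below some small threshold, and ask how long it has been since then. A minimal sketch (class and parameter names are my own, not from the post):

```python
import time

class StandingQueueDetector:
    """Tracks how long a queue has gone without draining (sketch)."""

    def __init__(self, threshold_pkts=1, now=time.monotonic):
        self.threshold = threshold_pkts  # "empty enough" depth, an assumption
        self.now = now                   # injectable clock for testing
        self.last_below = now()

    def on_sample(self, depth):
        """Call on every enqueue/dequeue with the current queue depth."""
        if depth < self.threshold:
            self.last_below = self.now()

    def standing_for(self):
        """Seconds since the queue last drained below the threshold."""
        return self.now() - self.last_below
```

A persistently large `standing_for()` is the signal that the aggregate of windows has passed the knee.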

From that perspective, one could argue that the simplest approach is to note the wall-clock time whenever a class or queue's depth falls below some threshold. If the class or queue goes longer than <mumble> without doing that, flagging ECN CE or dropping a packet is probably in order. The thing is, you want <mumble> to be variable, so that your mark/drop rate can track a reasonable number. If you do that, the queue will remain somewhere in the neighborhood of the knee, and all of its sessions will as well. The question isn't "what is the magic mean queue depth for min-threshold to be set to"; it's "what mark/drop rate is sufficient to keep the queue somewhat shallow most of the time".
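The scheme above could be sketched as follows. The names, the target mark rate, and the multiplicative adaptation rule are all my assumptions; the post deliberately leaves <mumble> unspecified, so here it is just a variable `interval` that gets nudged so the resulting mark/drop rate tracks a target.

```python
import time

class KneeAQM:
    """Sketch of the marking rule described above (my naming, my knobs)."""

    def __init__(self, threshold_pkts=1, interval=0.1,
                 target_mark_rate=10.0, now=time.monotonic):
        self.threshold = threshold_pkts
        self.interval = interval        # the variable <mumble>, in seconds
        self.target = target_mark_rate  # desired marks/sec, an assumption
        self.now = now                  # injectable clock for testing
        self.last_below = now()
        self.marks = 0
        self.start = now()

    def on_packet(self, depth):
        """Return True if this packet should be ECN-marked or dropped."""
        t = self.now()
        if depth < self.threshold:
            self.last_below = t         # queue drained: note the wall clock
            return False
        if t - self.last_below > self.interval:
            self.last_below = t         # restart the clock after a mark
            self.marks += 1
            self._adapt(t)
            return True
        return False

    def _adapt(self, t):
        # Crude adaptation (entirely my assumption): lengthen <mumble>
        # when marking faster than the target, shorten it when slower.
        elapsed = max(t - self.start, 1e-9)
        rate = self.marks / elapsed
        if rate > self.target:
            self.interval *= 1.1
        elif rate < self.target:
            self.interval *= 0.9
```

With a standing queue, marks arrive roughly once per `interval`, and the adaptation steers that toward the target rate rather than toward any magic queue depth - which is the point of the paragraph above.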
