[Bloat] Random idea in reaction to all the discussion of TCP flavours - timestamps?

Jonathan Morton chromatix99 at gmail.com
Sun Mar 20 04:40:52 PDT 2011


On 18 Mar, 2011, at 8:49 pm, Fred Baker wrote:

>> How about trying to push for a default whereby the logical egress buffers are limited to, say, 90% of the physical capacity, and only ECN-enabled flows may use the remaining 10% when they get marked...
> 
> Lots of questions in that; 90% of the buffers in what? In a host, in a router, on a card in a router, in a queue's configured maximum depth, what? One would need some pedagogic support in the form of simulations - why 90% vs 91% vs 10% vs whatever?
> 
>> Someone has to set an incentive for using ECN, unfortunately...
> 
> Yes. From my perspective, the right approach is probably more like introducing a mark/drop threshold and a drop threshold. Taking the model that every M time units we "do something" like
> 
>      if queue depth exceeds <toobig>
>         reduce M
>         drop something
>      else if queue depth exceeds <big>
>         reduce M
>         select something
>         if it is ECN-capable, 
>             mark it congestion-experienced
>         else
>             drop it
>      else if queue depth is below <hysteresis limit>
>         increase M
> 
> the advantage of ECN traffic is that it is less likely to be dropped. That might be a reasonable approach.

That does actually seem reasonable.  What's the betting that HW vendors still say it's too complicated?  :-D
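
For concreteness, here is a minimal sketch of that timer loop in Python.  The threshold values, the bounds on M, and the Packet class are my own assumptions for illustration; picking real numbers would need the simulations Fred mentions:

    import random
    from dataclasses import dataclass

    @dataclass
    class Packet:                    # hypothetical stand-in for a queued packet
        ecn_capable: bool = False
        ce: bool = False             # congestion-experienced mark

    TOOBIG = 100                     # assumed hard-drop threshold, in packets
    BIG = 80                         # assumed mark-or-drop threshold, in packets
    HYSTERESIS = 0                   # empty queue, as I suggest below
    M_MIN, M_MAX = 0.001, 0.100      # assumed bounds on the interval M, in seconds

    def service(queue, M):
        """Run once every M seconds; returns the new M."""
        depth = len(queue)
        if depth > TOOBIG:
            M = max(M / 2, M_MIN)                  # reduce M
            queue.pop(random.randrange(depth))     # drop something
        elif depth > BIG:
            M = max(M / 2, M_MIN)                  # reduce M
            i = random.randrange(depth)            # select something
            if queue[i].ecn_capable:
                queue[i].ce = True                 # mark congestion-experienced
            else:
                queue.pop(i)                       # drop it
        elif depth <= HYSTERESIS:
            M = min(M * 2, M_MAX)                  # increase M
        return M

A real implementation would drive this from a hardware timer and might pick the victim by queue position or flow rather than at random, but the branch structure is exactly the one quoted above.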

I think we can come up with some simple empirical rules for choosing queue sizes.  I may be half-remembering something VJ wrote, but here's a starting point (a code sketch of the arithmetic follows the list):

0) Buffering more than 1 second of data is always unacceptable.

1) Measure (or estimate) the RTT of a full-sized packet over the exit link and back, then add 100ms for typical Internet latency, calling this total T1.  If T1 is more than 500ms, clamp it to 500ms.  Calculate T2 to be twice T1; this will be at most 1000ms.

2) Measure (or estimate) the throughput BW of the exit link, in bytes per second.

3) Calculate the ideal queue length Q1 (in bytes) as T1 * BW, and the maximal queue length Q2 as T2 * BW.  These may optionally be rounded to the nearest multiple of a whole packet size, if that is convenient for the hardware.

4) If the link quality is strongly time-varying, e.g. mobile wireless, recalculate Q1 and Q2 as above at regular intervals.

5) If the link speed depends on the type of equipment at the other end, the quality of cabling, or other similar factors, use the actual negotiated link speed when calculating BW.  When these factors change, recalculate as above.
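
To make the arithmetic in steps 1-3 concrete, here is a small Python sketch; the function name, the default packet size, and the example numbers are mine:

    def queue_sizes(link_rtt, bw, packet_size=1500):
        """Steps 1-3: times in seconds, BW and queue lengths in bytes."""
        t1 = min(link_rtt + 0.100, 0.500)     # step 1: add 100ms, clamp to 500ms
        t2 = 2 * t1                           # at most 1000ms, honouring rule 0
        q1 = t1 * bw                          # step 3: ideal queue length
        q2 = t2 * bw                          # step 3: maximal queue length
        # optional rounding to the nearest multiple of a whole packet size
        q1 = round(q1 / packet_size) * packet_size
        q2 = round(q2 / packet_size) * packet_size
        return q1, q2

    # Example: 3ms link RTT and 2 MB/s (about 16 Mbit/s) of throughput
    # gives T1 = 103ms, T2 = 206ms, Q1 = 205500 and Q2 = 412500 bytes.
    print(queue_sizes(0.003, 2_000_000))

For rules 4 and 5, one would simply call this again whenever the measured BW or RTT changes.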

I would take the "hysteresis limit" to be an empty queue for the above algorithm.

 - Jonathan


