[Bloat] Designer of a new HW gadget wishes to avoid bufferbloat

Albert Rafetseder albert.rafetseder+bufferbloat at univie.ac.at
Mon Oct 22 11:58:14 EDT 2012


Hi Michael,

Not that I'm an expert by any means on the topics you touch, but I'll share my point of view on some of the questions raised. Please excuse my aggressive shortening of your original post.

> http://ifctfvax.Harhan.ORG/OpenWAN/OSDCU/

The site appears to be down, but from your description I think I understand what you are building.

> The SDSL->HDLC direction involves no bufferbloat issues: I can set
> things up so that no received packet ever has to be dropped, and the
> greatest latency that may be experienced by any packet is the HDLC
> side (DSU->DTE router) transmission time of the longest packet size
> allowed by the static configuration - and I can statically prove that
> both conditions I've just stated will be satisfied given a rather
> small buffer of only M+1 ATM cells, where M is the maximum packet size
> set by the static configuration, translated into ATM cells.  (For IPv4
> packets of up to 1500 octets, including the IPv4 header, using the
> standard RFC 1483 encapsulation, M=32.)

You must also rule out ATM cell loss and reordering. Otherwise, your receive buffer will at times hold too little data to reassemble the transmitted frame (temporarily in the case of reordering, permanently in the case of loss). This calls for a timeout of sorts. Quoting RFC 791, "Internet Protocol":

"The current recommendation for the initial timer setting is 15 seconds."

My colleague tells me his Linux boxes (3.5/x64, 2.6.32/x86) have an ip.ipfrag_time of 30 seconds. In any case, that's a lot of cells to buffer, I suppose.
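
To put a number on "lots of cells", here is a back-of-envelope sketch in C. The 2.3 Mbps line rate is purely my assumption for illustration (substitute your actual SDSL tier), and I'm using the Linux 30-second default as the reassembly timeout:

  /* How many ATM cells arrive during one IP reassembly timeout? */
  #include <stdio.h>

  int main(void)
  {
      const double line_rate_bps   = 2304000.0; /* assumed SDSL tier */
      const double cell_bits       = 53 * 8;    /* one ATM cell = 53 octets */
      const double timeout_seconds = 30.0;      /* Linux ip.ipfrag_time default */

      double cells_per_second = line_rate_bps / cell_bits;
      double cells_buffered   = cells_per_second * timeout_seconds;

      printf("%.0f cells/s -> %.0f cells (%.1f MB) per %g s timeout\n",
             cells_per_second, cells_buffered,
             cells_buffered * 53 / 1e6, timeout_seconds);
      return 0;
  }

Under those assumptions this comes out to roughly 163,000 cells, or about 8.6 MB -- quite a lot of buffer for a small gadget.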

> Strictly speaking, one could set the bit rate for the router->DSU
> direction of the V.35 interface so low that no matter what the router
> sends, that packet stream will always fit on the SDSL side without a
> packet ever having to be dropped.  However, because the worst case
> expansion in the HDLC->SDSL direction is so high (in one hypothetical
> case I've considered, UDP packets with 5 octets of payload, such that
> each IPv4 packet is 33 octets long, the RFC 1490->1483 expansion is
> 2.4x *before* the cell tax!), setting the clock so slow that even a
> continuous HDLC line rate stream of worst-case packets will fit is not
> a serious proposition.

Setting the ingress rate into your board on the HDLC side that low also pushes the potential bloat issue into the router's transmit buffer, which might be even more difficult to control.
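
(As a sanity check on the 2.4x expansion figure quoted above, here is a small C calculation under my own assumptions about the framing overheads -- the exact RFC 1490 overhead depends on the address field length and whether flags are shared, so treat the constants as illustrative:)

  /* Worst-case RFC 1490 -> RFC 1483 expansion for a 33-octet IPv4 packet. */
  #include <stdio.h>

  int main(void)
  {
      const int ip_len   = 33;  /* tiny UDP/IPv4 packet */
      /* RFC 1490: ~2 octets Q.922 address + 1 control + 1 NLPID + 2 FCS */
      const int hdlc_len = ip_len + 2 + 1 + 1 + 2;
      /* RFC 1483 LLC/SNAP over AAL5: 8 octets header + 8 octets trailer,
       * padded up to a multiple of 48 octets */
      const int aal5_pdu = ((ip_len + 8 + 8 + 47) / 48) * 48;
      const int cells    = aal5_pdu / 48;

      printf("pre-cell-tax expansion: %.2fx\n", (double)aal5_pdu / hdlc_len);
      printf("with cell tax (53/48):  %.2fx\n", (double)(cells * 53) / hdlc_len);
      return 0;
  }

This lands at about 2.5x before the cell tax and 2.7x after it, which roughly matches your figure.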

> L is the maximum allowed HDLC->SDSL packet latency measured in ATM
> cells, which directly translates into milliseconds for each given SDSL
> kbps tier [...]
>  My proposed logic design will drop the packet if that
> latency measure exceeds a set threshold, or allow it through
> otherwise.  My questions to the list are:
> 
> a) would it be a good packet drop policy, or not?
> 
> b) if it is a good policy, what would be a reasonable value for the
>   latency threshold L?  (In ATM cells or in ms, I can convert :)
> 
> The only major downside I can see with the approach I've just outlined
> is that it is a tail drop.  I've heard it said in the bufferbloat
> community that tail drop is bad and head drop is better.  However,
> implementing head drop or any other policy besides tail drop with the
> HW logic design outlined above would be very difficult: if the buffer
> is physically structured as a queue of ATM cells, rather than packets,
> then deleting a packet from the middle of the queue (it does no good
> to abort the transmission of a packet already started, hence head drop
> effectively becomes middle drop in terms of ATM cells) becomes quite a
> challenge.

Canceling packets halfway through transmission makes no sense, agreed. I think there are many good arguments for head drop, though, and while I'm not a hardware engineer, I don't see why it would be terribly difficult to implement. Suppose you have a list of packet start addresses within your cell transmit ring buffer. Drop the first list element. Head drop! Done! Once the current packet's cells are transmitted, you jump to the next start-of-packet in the list, wherever it is. As long as you ensure that packets you receive on the HDLC side don't overwrite the cells you are currently transmitting, you are fine. (If the buffer were really small, this would probably mean lots of head drops.) See the sketch below.
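
In code terms, a minimal sketch under my own assumptions about the data structures (all names are hypothetical, and the free-space accounting between the HDLC receive side and the ATM transmit side is omitted):

  #include <stdint.h>

  #define RING_CELLS 256   /* assumed ring capacity, in ATM cells */
  #define MAX_PKTS    64   /* assumed max number of queued packets */

  /* Cells awaiting transmission live in cell_ring[]; pkt_start[] is a
   * FIFO of indices where each *waiting* packet's first cell sits.
   * The packet currently on the wire is tracked via tx_cell and is
   * never touched by a drop. */
  static uint8_t  cell_ring[RING_CELLS][53];
  static uint16_t pkt_start[MAX_PKTS];
  static uint16_t head, count;   /* FIFO read pointer and fill level */
  static uint16_t tx_cell;       /* index of the cell now being sent */

  /* Head drop: forget the oldest waiting packet. Its cells simply
   * become free space the HDLC side may overwrite. */
  static void head_drop(void)
  {
      if (count == 0)
          return;
      head = (head + 1) % MAX_PKTS;
      count--;
  }

  /* When the in-flight packet's last cell has gone out, jump to the
   * next start-of-packet in the list, wherever it is in the ring. */
  static void on_tx_packet_done(void)
  {
      if (count == 0)
          return;               /* queue drained; transmitter idles */
      tx_cell = pkt_start[head];
      head = (head + 1) % MAX_PKTS;
      count--;
  }

The point being: head drop costs one pointer increment on a side list, not a restructuring of the cell queue itself.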

As regards values for the maximum latency L, a fixed threshold might be too restrictive a way of thinking about queue lengths. Usually, you'd like to accommodate short bursts of packets if you are sure you can get them out in reasonable time, but you don't want a long "standing queue" to develop. You might have read over at ACM Queue [1] that the CoDel queue management algorithm starts dropping packets once the time they spend enqueued exceeds a threshold. This doesn't mean that the queue length has a hard upper limit, though -- it tries to keep the queue short (since dropping packets signals "back off!" to the end hosts communicating across the link) while allowing some elasticity towards longer enqueueing times. Maybe you'd like to be the pioneer acid-testing CoDel's implementability in silicon; a toy sketch follows after the link.

[1] http://queue.acm.org/detail.cfm?id=2209336
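
For flavor, here is a toy rendition in C of that core idea -- timestamp on enqueue, judge by sojourn time on dequeue. This is my simplification, not CoDel proper: the real algorithm (see [1]) only starts dropping once the sojourn time has stayed above the target for a whole interval, and then paces successive drops by an inverse-square-root law.

  #include <stdbool.h>
  #include <stdint.h>
  #include <time.h>

  #define TARGET_US 5000   /* CoDel's suggested target sojourn time: 5 ms */

  /* On the FPGA this would read a free-running microsecond counter;
   * here, a host-side stand-in for experimentation. */
  static uint32_t now_us(void)
  {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return (uint32_t)(ts.tv_sec * 1000000u + ts.tv_nsec / 1000u);
  }

  struct pkt {
      uint32_t enqueue_us;  /* timestamped when the packet arrived */
      /* ... cell bookkeeping ... */
  };

  /* Judge the head-of-queue packet by how long it has been sitting. */
  static bool should_drop(const struct pkt *p)
  {
      return now_us() - p->enqueue_us > TARGET_US;
  }

What makes this attractive for silicon is that the per-packet state is essentially one timestamp, and the decision is a subtraction and a compare -- no rate estimation, no queue-length thresholds.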

Does any of this make sense to you?

Cheers,
 Albert.

-- 
Albert Rafetseder BSc. MSc.
Member of Scientific Staff

University of Vienna
Faculty of Computer Science
Research Group Future Communication (Endowed by A1 Telekom Austria AG)
Waehringer Strasse 29/5.46, A-1090 Vienna, Austria

T +43-1-42777-8620
albert.rafetseder at univie.ac.at
http://www.cs.univie.ac.at/fc





