[Bloat] Random idea in reaction to all the discussion of TCP flavours - timestamps?

Rick Jones rick.jones2 at hp.com
Tue Mar 15 14:14:37 EDT 2011


On Tue, 2011-03-15 at 10:59 -0700, Don Marti wrote:
> begin Jonathan Morton quotation of Tue, Mar 15, 2011 at 06:47:17PM +0200:
> > On 15 Mar, 2011, at 4:40 pm, Jim Gettys wrote:
> > 
> > > There is an interesting question about what "long term minimum" means here...
> > 
> > VJ does expand on that in "RED in a different light".  He means that the relevant measure of queue length is to take the minimum value over some interval of time, say 100ms or 1-2 RTTs, whichever is longer.  The average queue length is irrelevant.  The nRED algorithm in that paper proposes a method of doing that.
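
For concreteness, a minimal sketch of what that windowed-minimum
measurement might look like; the names and the 100 ms constant are
illustrative, not taken from the nRED paper:

/*
 * Sketch: track the minimum queue occupancy seen over a short
 * window (100 ms here, or 1-2 RTTs if that is longer).  A
 * "standing" queue is one whose windowed minimum never drains
 * to zero.  Names are illustrative, not from the nRED paper.
 */
#include <stdint.h>

struct min_tracker {
    uint32_t window_min;    /* smallest depth seen this window */
    uint64_t window_start;  /* when the current window began (ns) */
    uint64_t window_len;    /* window length in ns, e.g. 100 ms */
};

/* Call on every enqueue/dequeue with the current queue depth. */
static uint32_t min_tracker_update(struct min_tracker *t,
                                   uint32_t depth, uint64_t now)
{
    if (now - t->window_start >= t->window_len) {
        t->window_start = now;   /* window over: start again */
        t->window_min = depth;
    } else if (depth < t->window_min) {
        t->window_min = depth;
    }
    return t->window_min;        /* the queue that never went away */
}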
> 
> It seems like a host ought to be able to track the
> dwell time of packets in its own buffer(s), and drop
> anything that it held onto too long.
> 
> Timestamp every packet going into the buffer, and
> independently of any QoS work, check if a packet is
> "stale" on its way out, and if so, drop it instead of
> sending it.  Is this in use anywhere?  I haven't seen
> it in the literature linked from Jim's blog and this
> list.
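
Presumably something along these lines (a rough sketch of that
stale-on-dequeue check; the names and the 100 ms threshold are made
up for illustration):

/*
 * Sketch of the stale-on-dequeue idea: stamp each packet when it
 * enters the buffer, and on the way out drop anything that has
 * sat there longer than some threshold.
 */
#include <stdbool.h>
#include <stdint.h>

#define MAX_SOJOURN_NS (100ULL * 1000 * 1000)  /* 100 ms */

struct pkt {
    uint64_t enqueue_ns;   /* stamped at enqueue time */
    /* ... length, payload, etc. ... */
};

static void stamp_on_enqueue(struct pkt *p, uint64_t now)
{
    p->enqueue_ns = now;
}

/* True if the packet is still fresh enough to send; false means
 * it went stale while queued and should be dropped instead. */
static bool ok_to_send(const struct pkt *p, uint64_t now)
{
    return (now - p->enqueue_ns) <= MAX_SOJOURN_NS;
}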

Are there any NICs set up to allow (efficient) removal of packets from
the transmit queue (the one the NIC knows about) once they have been
handed to the NIC?  I'm not a driver writer (I've only complained to
driver writers that their drivers were using too much CPU :), but what
little I've seen suggests that the programming models of most (all?)
NICs assume the producer index only ever increases (modulo the queue
size)...  Or put another way, the host giveth, but only the NIC taketh
away.
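
Roughly the model I mean, much simplified (the descriptor layout is
illustrative, not any particular NIC's):

/*
 * Much-simplified TX descriptor ring of the sort most NICs seem
 * to expect: the host fills descriptors and advances the producer
 * index, the NIC transmits in order and advances the consumer
 * index.  Nothing here lets the host revoke a descriptor once it
 * has been posted.
 */
#include <stdint.h>

#define RING_SIZE 256          /* power of two, illustrative */

struct tx_desc {
    uint64_t buf_addr;         /* DMA address of the packet */
    uint16_t len;
    uint16_t flags;
};

struct tx_ring {
    struct tx_desc desc[RING_SIZE];
    uint32_t producer;         /* written by the host only */
    uint32_t consumer;         /* written by the NIC only  */
};

/* Host side: the only operation is "append and bump producer". */
static int tx_post(struct tx_ring *r, uint64_t dma_addr, uint16_t len)
{
    uint32_t next = (r->producer + 1) % RING_SIZE;

    if (next == r->consumer)
        return -1;             /* ring full */

    r->desc[r->producer].buf_addr = dma_addr;
    r->desc[r->producer].len      = len;
    r->desc[r->producer].flags    = 0;
    r->producer = next;        /* the host giveth... */
    /* a real driver would now ring a doorbell register; only the
       NIC ever advances 'consumer' (taketh away) */
    return 0;
}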

rick jones



