[Bloat] Random idea in reaction to all the discussion of TCP flavours - timestamps?
Dave Täht
d at taht.net
Tue Mar 15 18:59:19 PDT 2011
Jonathan Morton <chromatix99 at gmail.com> writes:
> On 16 Mar, 2011, at 3:02 am, Dave Täht wrote:
>
>>>>> 1) Wired devices, where we want to push 10+ Gbps, so we can assume
>>>>> a posted skb is transmitted immediately. Even a basic qdisc can be a
>>>>> performance bottleneck. Set TX ring size to 256 or 1024+ buffers to
>>>>> avoid taking too many interrupts.
>>>>
>>>> To talk to this a bit, the huge dynamic range discrepancy between a
>>>> 10GigE device and what it may be connected to worries me. Some form of
>>>> fair queuing should be applied before the data hits the driver.
>>>
>>> You mean plugging a 10GigE card into a 10Base-T hub? :-D
>>
>> More like 10GigE into a 1Gig switch. Or spewing out the entire contents
>> of a stream to one destination across the internet.
>
> Then that's no different to what I have in my apartment right now - a
> GigE switch connected to a 100base-TX switch, then to a 2Mbps DSL
> uplink, which could then be routed (after bouncing around backhauls for
> a bit) through a 500Kbps 3G downlink to a computer I've isolated from
> the LAN.
Having won one battle today (with ECN and pfifo), I'm ill-inclined to
start another...
The problem with your analogy is that you are starting from the edge in,
rather than from the center out. Consider a topology such as this:
  10GigE server --- 10Gig switch --- ten 1Gig ports
                    |  |  |  |  |  |  |  |  |  |
                    1  2  3  4  5  6  7  8  9  10
with the server servicing hundreds of flows over those ports.
Statistically, with fair queuing the number of receive buffers required
per port will be close to or equal to 1, whereas in a primitive FIFO
setup something greater than 10 is required.
Multiply that across the number of ports strewn along the eventual path,
and across the number of streams, and the enormous disparity between the
source data rate and the receive data rate is lessened.
This is what I mean by talking about fair queuing, or any of a zillion
variants thereof.
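To make that concrete, here's a toy model (mine, in Python; the tick
rates and port count are made up to match the picture above, and there
is no real qdisc behind it) of the per-port buffering each approach
needs:

# Toy model: a 10GigE sender emits 10 packets per tick; each of the
# ten 1Gig switch ports drains 1 packet per tick. Compare the peak
# per-port buffering for a plain FIFO at the sender (a 10-packet
# burst to one destination per tick) versus round-robin fair queuing
# (one packet to each destination per tick).

PORTS, TICKS = 10, 50

def peak_occupancy(fair):
    backlog = [0] * PORTS             # packets queued at each 1Gig port
    peak = 0
    for t in range(TICKS):
        # the sender's 10 departures this tick
        dests = range(PORTS) if fair else [t % PORTS] * PORTS
        for d in dests:
            backlog[d] += 1
        peak = max(peak, max(backlog))
        for p in range(PORTS):        # each port drains one packet
            if backlog[p]:
                backlog[p] -= 1
    return peak

print("FIFO peak per-port buffering:", peak_occupancy(False))  # -> 10
print("FQ   peak per-port buffering:", peak_occupancy(True))   # -> 1

The interleaving is the whole trick: a back-to-back burst to one
destination arrives at ten times the port's drain rate, while a
fair-queued stream arrives at exactly the rate the port can swallow.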
> If the flow is responsive, as with every sane TCP, the queue will end
But it isn't responsive with the large buffers we are dealing with at
present, i.e. bufferbloat.
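Back-of-the-envelope arithmetic (illustrative numbers only, borrowing
the 2Mbps DSL figure from your example): the drop that tells TCP to
slow down is delayed by the drain time of whatever queue sits in front
of the bottleneck.

PACKET_BYTES = 1500
LINK_BPS = 2_000_000                  # the 2Mbps DSL uplink

for buffer_pkts in (32, 256, 1024):
    delay = buffer_pkts * PACKET_BYTES * 8 / LINK_BPS
    print(f"{buffer_pkts:4d}-packet buffer -> {delay:5.2f}s of queuing delay")

A 256-packet buffer adds about a second and a half of standing queue,
far outside the RTTs the TCP control loop is built around.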
> up in front of the slowest link - at the 3G tower. That's where the
> AQM would need to be. The GigE adapter in my nettop would be largely
> idle, as a normal function of the TCP congestion window.
Yes. But you started from a different place in your analogy than mine.
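Your steady-state picture is easy to check with a toy model (mine
again; the relative rates, in packets per tick, are invented):

# Constant offered load through a chain of progressively slower hops;
# the standing queue forms only in front of the slowest link.
rates  = [("GigE", 10), ("DSL", 4), ("3G", 1)]
queues = {name: 0 for name, _ in rates}

for _ in range(100):
    inflow = 2                        # steady offered load per tick
    for name, rate in rates:
        queues[name] += inflow
        sent = min(queues[name], rate)
        queues[name] -= sent
        inflow = sent                 # forwarded to the next hop

print(queues)                         # {'GigE': 0, 'DSL': 0, '3G': 100}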
>
> If it isn't, the queue will build up at *every* narrowing of the
> channel and the packet loss will be astronomical. All AQM could do
> then is to pick any real traffic out of the din.
Never said we had only one problem. :)
>
> - Jonathan
>
--
Dave Taht
http://nex-6.taht.net