[Cerowrt-devel] happy 4th!
dpreed at reed.com
Sun Jul 7 14:52:58 EDT 2013
Wherever the idea came from that you "had to buffer RTT*2" in a mid-path node, it is categorically wrong.
What is possibly relevant is that you will have RTT * bottleneck-bit-rate bits "in flight" from end-to-end in order not to be constrained by the acknowledgement time. That is: TCP's outstanding "window" should be RTT*bottleneck-bit-rate to maximize throughput. Making the window *larger* than that is not helpful.
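To make that concrete, here is a back-of-the-envelope sketch in Python (the 10 Mbit/s bottleneck rate and 100 ms RTT are assumed numbers, purely for illustration):

    # Bandwidth-delay product: the amount of data a TCP sender needs
    # "in flight" to keep the bottleneck busy between acknowledgements.
    bottleneck_bit_rate = 10_000_000   # assumed: 10 Mbit/s bottleneck
    rtt = 0.100                        # assumed: 100 ms round-trip time

    bdp_bits = bottleneck_bit_rate * rtt
    print(f"window needed: {bdp_bits / 8 / 1024:.0f} KiB")  # ~122 KiB

Anything beyond that window just sits in queues somewhere; it adds latency, not throughput.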
So when somebody "throws that in your face", just confidently use the words "Bullshit, show me evidence", and ignore the ignorant person who is repeating an urban legend similar to the one about the size of crocodiles in New York's sewers that are supposedly there because people throw pet crocodiles down there.
If you need a simplified explanation of why buffering 2 * worst-case-around-the-world-RTT * maximum-bit-rate-on-the-path is a bad idea, all you need to think about is what happens when some intermediate huge bottleneck buffer fills up (which it certainly will, very quickly, since by definition the paths feeding it have much higher delivery rates than it can handle).
What will happen? A packet will be silently discarded from the "tail" of the queue. But that packet's loss will not be discovered by the endpoints until roughly the worst-case-RTT * 2 (or maybe 4, if the reverse path is similarly clogged) seconds later. Meanwhile the sources will have happily *sustained* the bottleneck's full buffer, by putting out that many bits past the lost packet's position (thinking all is well).
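A quick sketch of the timing, again with assumed numbers (an "RTT*2"-sized buffer on a 10 Mbit/s link with a 100 ms base RTT):

    # How long before the sender can possibly learn of a tail drop:
    # the full queue ahead of the drop point must drain at the
    # bottleneck's rate, plus a round trip for the duplicate ACKs
    # (or a timeout) to come back.
    bottleneck_bit_rate = 10_000_000               # assumed: 10 Mbit/s
    rtt = 0.100                                    # assumed: 100 ms base RTT
    buffer_bits = 2 * rtt * bottleneck_bit_rate    # the "RTT*2" buffer

    drain_time = buffer_bits / bottleneck_bit_rate  # 200 ms of queue ahead
    print(f"loss noticed after ~{(drain_time + rtt) * 1000:.0f} ms")  # ~300 ms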
And so what will happen? Most of the packets behind the lost one will be retransmitted by the source. This, of course, *doubles* the packet rate into the bottleneck.
And there is an infinite regression - all the while a solidly maintained, extremely long queue of packets waits for the bottleneck link. Many, many seconds of end-to-end latency on that link, perhaps.
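To get a feel for the standing latency, assume (again, made-up numbers) a 1 MB queue sitting in front of a 1 Mbit/s bottleneck:

    # Every packet arriving at a solidly full buffer waits for everything
    # ahead of it to be transmitted at the bottleneck's rate.
    buffer_bytes = 1_000_000          # assumed: 1 MB of queued packets
    bottleneck_bit_rate = 1_000_000   # assumed: 1 Mbit/s link

    standing_delay = buffer_bytes * 8 / bottleneck_bit_rate
    print(f"queueing delay: {standing_delay:.0f} s per packet")  # 8 s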
Only if all users of that link "give up and go home" for the day will the bottleneck link's send queue ever drain. New TCP connections will open, and if they are lucky, they will see a link with Earth-to-Pluto delays as its norm on their SYN/SYN-ACK. But they won't get better service than that, while they continue to congest the node.
What you need is a message from the bottleneck link that says "WHOA - I can't process all this traffic". And that happens *only* when that link actually drops packets once about 50 msec or less of traffic is queued.
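As a sketch of what that sizing rule means in bytes (the link rate is an assumption; the ~50 msec target is from the paragraph above):

    # Cap the queue at roughly 50 msec of traffic at the link's own rate;
    # anything beyond that should be dropped so the endpoints find out early.
    bottleneck_bit_rate = 10_000_000   # assumed: 10 Mbit/s link
    max_queue_delay = 0.050            # ~50 msec, per the text above

    max_queue_bytes = bottleneck_bit_rate * max_queue_delay / 8
    print(f"queue cap: ~{max_queue_bytes / 1024:.0f} KiB")  # ~61 KiB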
On Thursday, July 4, 2013 1:57am, "Mikael Abrahamsson" <swmike at swm.pp.se> said:
> On Wed, 3 Jul 2013, Dave Taht wrote:
>
> > Suggestions as to things to test and code to test them welcomed. In
>
> I'm wondering a bit what the shallow buffering depth means to higher-RTT
> connections. When I advocate bufferbloat solutions I usually get thrown in
> my face that shallow buffering means around-the-world TCP-connections will
> behave worse than with a lot of buffers (traditional truth being that you
> need to be able to buffer RTT*2).
>
> It would be very interesting to see what an added 100ms
> (<http://stackoverflow.com/questions/614795/simulate-delayed-and-dropped-packets-on-linux>)
> and some packet loss/PDV would result in. If it still works well, at least
> it would mean that people concerned about this could go back to rest.
>
> Also, it would be interesting to see if Google's proposed QUIC interacts well
> with the bufferbloat solutions. I imagine it will since it in itself
> measures RTT and FQ_CODEL is all about controlling delay, so I imagine
> QUIC will see a quite constant view of the world through FQ_CODEL.
>
> --
> Mikael Abrahamsson email: swmike at swm.pp.se