[Bloat] Goodput fraction w/ AQM vs bufferbloat

Richard Scheffenegger rscheff at gmx.at
Sun May 8 08:34:44 EDT 2011


Hi Fred,

Goodput can really only be measured at the sender; by definition, every 
retransmitted packet reduces goodput relative to throughput. In your example, 
where each segment is retransmitted once, goodput would be - at most - 0.5, 
not 1.0... IMHO, defining the data volume seen after the bottleneck as 
goodput is also a bit short-sighted, because a good fraction of that data 
may still be discarded by the receiving TCP for numerous reasons (e.g. 
duplicates arriving during legacy go-back-N RTO recovery by the sender)...
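
To make the distinction concrete, here is a minimal sketch (Python, with 
made-up numbers) of a sender-side goodput fraction: unique payload bytes 
delivered divided by total payload bytes put on the wire, so every 
retransmission counts against goodput.

    # Minimal sketch: goodput fraction from the sender's perspective.
    # The numbers are hypothetical; a real tool would parse a packet capture.

    def goodput_fraction(unique_bytes_delivered, total_bytes_sent):
        """Goodput = unique data delivered / everything the sender transmitted."""
        if total_bytes_sent == 0:
            return 0.0
        return unique_bytes_delivered / total_bytes_sent

    # The example above: every segment is retransmitted exactly once,
    # so the sender transmits each byte twice.
    unique = 64 * 1024        # 64 kB of application data
    sent = 2 * unique         # each segment sent twice
    print(goodput_fraction(unique, sent))   # 0.5, not 1.0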

Measuring at the receiver (or in-path network) side of a SACK-enabled 
session will miss all the instances where the last segment (or a run of 
segments up to and including the last) was lost, or where a retransmitted 
segment was itself lost again.

The former can be approximated by checking for RTOs (which already requires 
some heuristic to estimate what the sender's RTO timeout is likely to be - 
the 1 s minRTO prescribed by the IETF RFC is virtually never used). The 
latter, where retransmitted segments are lost again, can only be inferred 
indirectly from a receiver-side (or in-path) trace, because lost-retransmission 
detection is done by one stack (Linux) but not by the others, and RTOs again 
cannot be avoided under all circumstances.
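
One way such a heuristic might look - a sketch only, following the RFC 6298 
smoothing formula, with an assumed 200 ms floor instead of the nominal 1 s 
minimum, since that is closer to what deployed stacks actually do:

    # Sketch: estimate the sender's likely RTO from RTT samples observed in a
    # receiver-side or in-path trace, using the RFC 6298 SRTT/RTTVAR update.
    # The 200 ms floor is an assumption, not taken from any particular stack.

    def estimate_rto(rtt_samples, min_rto=0.2, clock_granularity=0.001):
        srtt = rttvar = None
        for rtt in rtt_samples:
            if srtt is None:                   # first sample
                srtt, rttvar = rtt, rtt / 2
            else:                              # subsequent samples
                rttvar = 0.75 * rttvar + 0.25 * abs(srtt - rtt)
                srtt = 0.875 * srtt + 0.125 * rtt
        if srtt is None:
            return min_rto
        return max(min_rto, srtt + max(clock_granularity, 4 * rttvar))

    print(estimate_rto([0.017, 0.080, 0.220]))   # ~0.31 s for a path going bloated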


But back to my original question: with modern TCP stacks and TSO, if 
bufferbloat allows the sender's cwnd to grow beyond the thresholds that 
enable aggressive use of TSO (64 kB or even 256 kB of data allowed in the 
sender's cwnd), the effective sending rate of such a burst will be wire 
speed (no interleaving with segments of other sessions). As pointed out in 
other mails in this thread, if the bottleneck then has 1/10th the capacity 
of the sender's wire (and is potentially shared among multiple senders), at 
least 90% of the data in such a TSO segment train will be dropped in a 
single burst of loss... With proper AQM, and some (single-segment) loss 
earlier, cwnd may never grow enough to trigger TSO in that way, and the 
goodput comparison (1 segment delivered out of 64 kB sent, vs. 58 kB 
delivered out of 64 kB sent) clearly favours the AQM scenario...
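
A back-of-the-envelope sketch of that comparison (the burst size, bottleneck 
ratio and per-scenario delivery figures are the illustrative numbers from 
the paragraph above, not measurements):

    # Illustrative arithmetic for the TSO burst example; all figures are the
    # hypothetical ones from the text, not measurements.

    MSS = 1460                    # bytes per segment
    burst = 64 * 1024             # 64 kB TSO train sent back-to-back at wire speed

    # Bufferbloat case: the bottleneck (1/10th of the sender's line rate) cannot
    # absorb the burst; in the worst case only a segment or so gets through.
    goodput_bloat = (1 * MSS) / burst

    # AQM case: an early single-segment drop keeps cwnd below the TSO threshold,
    # so the burst never happens and almost everything sent is delivered.
    goodput_aqm = (58 * 1024) / burst

    print(f"bufferbloat: {goodput_bloat:.0%}, AQM: {goodput_aqm:.0%}")
    # -> roughly 2% vs 91%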

So, qualitatively, an ISP with proper AQM should be able to achieve better 
goodput (for downloads from, or uploads to, its upstream ISP). However, 
pricing is typically based on the data volume exchanged - if goodput is 
lower, a correspondingly higher volume is needed to achieve the same "real" 
data exchange.

However, the next question becomes how to quantify this at large scale. If 
the monetary difference is, say, in the vicinity of 2-3% saved volume 
(roughly the average Internet loss ratio), that accumulates to huge sums for 
small and medium ISPs (which get charged more per unit of volume than large 
ISPs).

If the quantitative difference is only 0.02-0.05%, say, then the monetary 
incentive for small ISPs to enable AQM is not really there (and these ISPs 
would have to be motivated by other, typically much weaker, incentives).
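
As a rough way to frame the money question (the retransmission fractions, 
monthly volume and price per GB below are placeholder assumptions, not data):

    # Rough framing of the volume-cost question.  The loss/retransmission
    # fractions, the monthly volume and the price per GB are placeholders.

    def extra_volume_cost(useful_gb, retrans_fraction, price_per_gb):
        """Extra billed volume caused by retransmissions, in currency units.

        To deliver useful_gb of goodput, the ISP is billed for
        useful_gb / (1 - retrans_fraction) of throughput volume.
        """
        billed = useful_gb / (1.0 - retrans_fraction)
        return (billed - useful_gb) * price_per_gb

    monthly_goodput_gb = 500_000      # hypothetical small-ISP monthly volume
    price = 0.01                      # hypothetical price per GB

    print(extra_volume_cost(monthly_goodput_gb, 0.025, price))    # ~2.5% loss -> ~128
    print(extra_volume_cost(monthly_goodput_gb, 0.0005, price))   # 0.05% loss -> ~2.5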

Best regards,
   Richard

----- Original Message ----- 
From: "Fred Baker" <fredbakersba at gmail.com>
To: "Jim Gettys" <jg at freedesktop.org>
Cc: <bloat at lists.bufferbloat.net>
Sent: Friday, May 06, 2011 6:18 AM
Subject: Re: [Bloat] Goodput fraction w/ AQM vs bufferbloat


> There are a couple of ways to approach this, and they depend on your 
> network model.
>
> In general, if you assume that there is one bottleneck, losses occur in 
> the queue at the bottleneck, and are each retransmitted exactly once (not 
> necessary, but helps), goodput should approximate 100% regardless of the 
> queue depth. Why? Because every packet transits the bottleneck once - if 
> it is dropped at the bottleneck, the retransmission transits the 
> bottleneck. So you are using exactly the capacity of the bottleneck.
>
> The value of a shallow queue is to reduce RTT, not to increase or decrease 
> goodput. cwnd can become too small, however; if it is possible to set cwnd 
> to N without increasing queuing delay, and cwnd is less than N, you're not 
> maximizing throughput. When cwnd grows above N, it merely increases 
> queuing delay, and therefore bufferbloat.
>
> If there are two bottlenecks in series, you have some probability that a 
> packet transits one bottleneck and doesn't transit the other. In that 
> case, there is probably an analytical way to describe the behavior, but it 
> depends on a lot of factors including distributions of competing traffic. 
> There are a number of other possibilities; imagine that you drop a packet, 
> there is a SACK, you retransmit it, the ACK is lost, and meanwhile there 
> is another loss. You could easily retransmit the retransmission 
> unnecessarily, which reduces goodput. The list of silly possibilities goes 
> on for a while, and we have to assume that each has some probability of 
> happening in the wild.
>
>
>
> On May 5, 2011, at 9:01 AM, Jim Gettys wrote:
>
>> On 04/30/2011 03:18 PM, Richard Scheffenegger wrote:
>>> I'm curious, has anyone done some simulations to check if the following 
>>> qualitative statement holds true, and if, what the quantitative effect 
>>> is:
>>>
>>> With bufferbloat, the TCP congestion control reaction is unduly 
>>> delayed. When it finally happens, the tcp stream is likely facing a 
>>> "burst loss" event - multiple consecutive packets get dropped. Worse 
>>> yet, the sender with the lowest RTT across the bottleneck will likely 
>>> start to retransmit while the (tail-drop) queue is still overflowing.
>>>
>>> And a lost retransmission means a major setback in bandwidth (except for 
>>> Linux with bulk transfers and SACK enabled), as the standard (RFC 
>>> documented) behaviour asks for an RTO (1 sec nominally, 200-500 ms 
>>> typically) to recover such a lost retransmission...
>>>
>>> The second part (more important as an incentive to the ISPs actually), 
>>> how does the fraction of goodput vs. throughput change, when AQM schemes 
>>> are deployed, and TCP CC reacts in a timely manner? Small ISPs have to 
>>> pay for their upstream volume, regardless if that is "real" work 
>>> (goodput) or unnecessary retransmissions.
>>>
>>> When I was at a small cable ISP in Switzerland last week, surely enough 
>>> bufferbloat was readily observable (17ms -> 220ms after 30 sec of a bulk 
>>> transfer), but at first they had the "not our problem" view, until I 
>>> started discussing burst loss / retransmissions / goodput vs 
>>> throughput - with the latest point being a real commercial incentive to 
>>> them. (They promised to check if AQM would be available in the CPE / 
>>> CMTS, and put latency bounds in their tenders going forward).
>>>
>> I wish I had a good answer to your very good questions.  Simulation would 
>> be interesting, though real data is more convincing.
>>
>> I haven't looked in detail at all that many traces to try to get a feel 
>> for how much bandwidth waste there actually is, and more formal studies 
>> like Netalyzr, SamKnows, or the Bismark project would be needed to 
>> quantify the loss on the network as a whole.
>>
>> I did spend some time last fall with the traces I've taken.  In those, 
>> I've typically been seeing 1-3% packet loss in the main TCP transfers. 
>> On the wireless trace I took, I saw 9% loss, but whether that is 
>> bufferbloat induced loss or not, I don't know (the data is out there for 
>> those who might want to dig).  And as you note, the losses are 
>> concentrated in bursts (probably due to the details of Cubic, so I'm 
>> told).
>>
>> I've had anecdotal reports (and some first hand experience) with much 
>> higher loss rates, for example from Nick Weaver at ICSI; but I believe in 
>> playing things conservatively with any numbers I quote and I've not 
>> gotten consistent results when I've tried, so I just report what's in the 
>> packet captures I did take.
>>
>> A phenomenon that could be occurring is that during congestion avoidance 
>> (until TCP loses its cookies entirely and probes for a higher operating 
>> point) that TCP is carefully timing its packets to keep the buffers 
>> almost exactly full, so that competing flows (in my case, simple pings) 
>> are likely to arrive just when there is no buffer space to accept them 
>> and therefore you see higher losses on them than you would on the single 
>> flow I've been tracing and getting loss statistics from.
>>
>> People who want to look into this further would be a great help.
>>                - Jim
>>
>>



