[Cerowrt-devel] Ubiquiti QOS

David Lang david at lang.hm
Tue May 27 19:27:45 EDT 2014


On Tue, 27 May 2014, Dave Taht wrote:

> There is a phrase in this thread that is beginning to bother me.
>
> "Throughput". Everyone assumes that throughput is a big goal - and it
> certainly is - and latency is also a big goal - and it certainly is -
> but by specifying what you want from "throughput" as a compromise with
> latency is not the right thing...
>
> If what you want is actually "high speed in-order packet delivery" -
> say, for example, a movie, YouTube, or a video conference - excessive
> latency with high throughput really, really makes in-order packet
> delivery at high speed tough.

the key word here is "excessive"; that's why I said that for max throughput
you want to buffer as much as your latency budget will allow.
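
To put a rough number on "as much as your latency budget will allow": the
buffer that just fills a given latency budget is rate times budget. A quick
back-of-the-envelope sketch in Python (the 20 Mbit/s rate, 100 ms budget and
1500-byte packets are made-up example numbers, not a recommendation):

# How much buffering a given latency budget allows at the bottleneck.
LINK_RATE_BPS = 20_000_000      # bottleneck link rate, bits/s (example number)
LATENCY_BUDGET_S = 0.100        # queueing delay we can tolerate (example number)

buffer_bytes = LINK_RATE_BPS * LATENCY_BUDGET_S / 8
packets = buffer_bytes / 1500   # assuming 1500-byte packets

print(f"{buffer_bytes / 1000:.0f} kB (~{packets:.0f} packets) of buffer fills a "
      f"{LATENCY_BUDGET_S * 1000:.0f} ms budget at {LINK_RATE_BPS / 1e6:.0f} Mbit/s")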

> You eventually lose a packet, and you have to wait a really long time
> until a replacement arrives. Stuart and I showed that at the last IETF.
> And you get the classic "buffering" song playing....

Yep, and if you buffer too much, your "lost packet" is actually still in flight 
and eating bandwidth.
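
(To put numbers on the recovery cost: the retransmission can't arrive sooner
than roughly one round trip, and the queueing delay sitting in that over-full
buffer is part of the round trip. A toy Python sketch with made-up numbers:)

BASE_RTT_S = 0.040              # path RTT with empty queues (example number)

# Compare a well-managed queue against increasingly bloated ones.
for queue_delay_s in (0.005, 0.100, 1.000):
    recovery_s = BASE_RTT_S + queue_delay_s   # rough lower bound on loss recovery
    print(f"{queue_delay_s * 1000:6.0f} ms of queueing -> in-order delivery "
          f"stalls for at least {recovery_s * 1000:.0f} ms after a loss")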

David Lang

> low latency makes recovery from a loss in an in-order stream much, much faster.
>
> Honestly, for most applications on the web, what you want is high
> speed in-order packet delivery, not
> "bulk throughput". There is a whole class of apps (bittorrent, file
> transfer) that don't need that, and we
> have protocols for those....
>
>
>
> On Tue, May 27, 2014 at 2:19 PM, David Lang <david at lang.hm> wrote:
>> the problem is that paths change, traffic from different streams gets
>> mixed, and in other ways the utilization of the links can change
>> radically in a short amount of time.
>>
>> If you try to limit things to exactly the ballistic throughput, you are
>> not going to be able to maintain that state exactly: you will either
>> overshoot (too much traffic, requiring dropped packets to maintain your
>> minimal buffer) or undershoot (too little traffic, leaving your
>> connection idle).
>>
>> Since you can't predict all the competing traffic throughout the Internet,
>> if you want to maximize throughput, you want to buffer as much as you can
>> tolerate for latency reasons. For most apps, this is more than enough to
>> cause problems for other connections.
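
(Here is a toy sketch of that overshoot/undershoot problem: a sender pinned
to one fixed rate against a bottleneck whose spare capacity fluctuates
because of competing traffic. All the numbers are invented for the example.)

import random

random.seed(1)
SEND_RATE = 10.0                      # Mbit/s we decided to send at (example)
backlog = 0.0                         # Mbit queued (or dropped) at the bottleneck
wasted = 0.0                          # Mbit of capacity left unused

for second in range(60):
    capacity = random.uniform(6.0, 14.0)    # available Mbit/s this second
    if SEND_RATE > capacity:
        backlog += SEND_RATE - capacity     # overshoot: queue grows or packets drop
    else:
        wasted += capacity - SEND_RATE      # undershoot: the link sits partly idle

print(f"after 60s: {backlog:.1f} Mbit backlogged/dropped, {wasted:.1f} Mbit unused")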
>>
>> David Lang
>>
>>
>>  On Mon, 26 May 2014, David P. Reed wrote:
>>
>>> Codel and PIE are excellent first steps... but I don't think they are the
>>> best eventual approach.  I want to see them deployed ASAP in CMTSes and
>>> server load balancing networks... it would be a disaster not to deploy the
>>> far better option we have today immediately at the point of most leverage.
>>> The best is the enemy of the good.
>>>
>>> But, the community needs to learn once and for all that throughput and
>>> latency do not trade off. We can in principle get far better latency while
>>> maintaining high throughput.... and we need to start thinking about that.
>>> That means that the framing of the issue as AQM is counterproductive.
>>>
>>> On May 26, 2014, Mikael Abrahamsson <swmike at swm.pp.se> wrote:
>>>>
>>>> On Mon, 26 May 2014, dpreed at reed.com wrote:
>>>>
>>>>> I would look to queue minimization rather than "queue management"
>>>>> (which implied queues are often long) as a goal, and think harder
>>>>> about the end-to-end problem of minimizing total end-to-end queueing
>>>>> delay while maximizing throughput.
>>>>
>>>> As far as I can tell, this is exactly what CoDel and PIE try to do.
>>>> They try to find a decent tradeoff between keeping enough of a queue
>>>> to make sure the pipe stays filled, and not letting these queues grow
>>>> big enough to seriously affect interactive performance.
>>>>
>>>> The latter part looks like what LEDBAT does?
>>>> <http://tools.ietf.org/html/rfc6817>
>>>>
>>>> Or are you thinking about something else?
>>>
>>>
>>> -- Sent from my Android device with K-@ Mail. Please excuse my brevity.
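
On Mikael's point about what CoDel and PIE try to do: the key trick is that
they judge the queue by how long packets sit in it rather than by how many
bytes are queued, and they only react when that delay stays above a small
target for a while, so short bursts that keep the pipe full are left alone.
A very rough sketch of that idea in Python (this is not the real CoDel
algorithm, just the persistent-delay test; the 5 ms / 100 ms constants are
CoDel's usual defaults):

TARGET_S = 0.005       # acceptable standing queueing delay
INTERVAL_S = 0.100     # delay must persist this long before we react

class DelayWatcher:
    """Simplified persistent-queue detector, loosely in the spirit of CoDel."""

    def __init__(self):
        self.above_since = None   # when the delay first went above target

    def should_drop(self, enqueue_time, now):
        sojourn = now - enqueue_time          # how long this packet queued
        if sojourn < TARGET_S:
            self.above_since = None           # queue drained: bursts are fine
            return False
        if self.above_since is None:
            self.above_since = now            # start the grace period
        return (now - self.above_since) >= INTERVAL_S   # persistent queue: drop

w = DelayWatcher()
print(w.should_drop(enqueue_time=0.00, now=0.02))   # over target, grace period -> False
print(w.should_drop(enqueue_time=0.10, now=0.15))   # still over 130 ms later -> True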
>>
>>
>> _______________________________________________
>> Cerowrt-devel mailing list
>> Cerowrt-devel at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>


