Let's make wifi fast again!
From: Dave Taht <dave.taht@gmail.com>
To: Bob McMahon <bob.mcmahon@broadcom.com>
Cc: David Lang <david@lang.hm>,
	 "ath9k-devel@lists.ath9k.org" <ath9k-devel@venema.h4ckr.net>,
	make-wifi-fast@lists.bufferbloat.net
Subject: Re: [Make-wifi-fast] Diagram of the ath9k TX path
Date: Fri, 13 May 2016 10:49:51 -0700	[thread overview]
Message-ID: <CAA93jw6T=5KHYKT1F+Uo-VH85QMpfBxgHZYLhUhH+Tnuj76D+g@mail.gmail.com> (raw)
In-Reply-To: <CAHb6Lvoe2ZB0mXSRPywwj2LVMwi7LdtWifd=sYkTZmZKrqmA0w@mail.gmail.com>



I try to stress that a single TCP flow should never use all of the bandwidth
if the sawtooth is to function properly.
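
To illustrate the sawtooth point, here is a toy single-flow AIMD model
(the numbers are made up, not a measurement): with one Reno-like flow and no
bottleneck buffer, cwnd oscillates between half the BDP and the BDP, so the
link only averages about 75% utilization - that headroom is what multiple
flows (or adequate buffering) fill in.

    /* Toy AIMD sawtooth, illustrative only: one Reno-like flow, no queue at
     * the bottleneck, BDP picked arbitrarily.  Prints ~75% utilization. */
    #include <stdio.h>

    int main(void)
    {
        const double bdp = 166.0;   /* bottleneck BDP in segments (hypothetical) */
        double cwnd = bdp / 2.0, sent = 0.0, capacity = 0.0;

        for (int rtt = 0; rtt < 10000; rtt++) {
            sent += (cwnd < bdp) ? cwnd : bdp;   /* can't send faster than the link */
            capacity += bdp;
            cwnd += 1.0;                         /* additive increase per RTT */
            if (cwnd > bdp)
                cwnd = bdp / 2.0;                /* halve on loss at the BDP */
        }
        printf("single-flow utilization ~= %.0f%%\n", 100.0 * sent / capacity);
        return 0;
    }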

What happens when you hit it with 4 flows? or 12?

Nice graph, but I don't understand the single blue spikes.

On Fri, May 13, 2016 at 10:46 AM, Bob McMahon <bob.mcmahon@broadcom.com>
wrote:

> On driver delays: from a driver development perspective, the problem isn't
> whether to add delay (it shouldn't); it's that the TCP stack isn't presenting
> sufficient data to fully utilize aggregation.  Below is a histogram comparing
> the aggregation of three systems (units are MPDUs per A-MPDU).  The
> lowest-latency stack is in purple, and it is also the worst performer with
> respect to average throughput.  From a driver perspective, one would like TCP
> to present enough bytes into the pipe that the histogram leans toward the
> blue.
>
> [image: Inline image 1: histogram of MPDUs per A-MPDU for the three systems]
> I'm not an expert on TCP congestion avoidance, but maybe the algorithm could
> benefit from using RTT weighted by CWND (or bytes in flight), and hunt for
> that maximum?
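
For what it's worth, a minimal sketch of that "hunt the maximum" idea: treat
bytes-in-flight divided by RTT as the delivery-rate signal and probe for the
inflight value where it stops improving (roughly what BBR-style probing does).
All names, thresholds and the probe step below are hypothetical, not any real
stack's code.

    /* Hypothetical sketch: hunt the inflight value that maximizes
     * inflight / RTT, backing off once extra inflight only adds RTT. */
    struct cc_state {
        double inflight;        /* current bytes-in-flight target */
        double best_rate;       /* best observed inflight/RTT so far */
        double best_inflight;   /* inflight that produced best_rate */
    };

    static void cc_on_rtt_sample(struct cc_state *s, double inflight_bytes,
                                 double rtt_sec)
    {
        double rate = inflight_bytes / rtt_sec;   /* delivery-rate estimate */

        if (rate > s->best_rate * 1.01) {         /* still gaining: probe upward */
            s->best_rate = rate;
            s->best_inflight = inflight_bytes;
            s->inflight = inflight_bytes * 1.25;
        } else {                                  /* rate flat, RTT growing: back off */
            s->inflight = s->best_inflight;       /* sit near the knee */
        }
    }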
>
> Bob
>
> On Mon, May 9, 2016 at 8:41 PM, David Lang <david@lang.hm> wrote:
>
>> On Mon, 9 May 2016, Dave Taht wrote:
>>
>> On Mon, May 9, 2016 at 7:25 PM, Jonathan Morton <chromatix99@gmail.com>
>>> wrote:
>>>
>>>>
>>>> On 9 May, 2016, at 18:35, Dave Taht <dave.taht@gmail.com> wrote:
>>>>>
>>>>> should we always wait a little bit to see if we can form an aggregate?
>>>>>
>>>>
>>>> I thought the consensus on this front was “no”, as long as we’re making
>>>> the decision when we have an immediate transmit opportunity.
>>>>
>>>
>>> I think it is more nuanced than how David Lang has presented it.
>>>
>>
>> I have four reasons for arguing for no speculative delays.
>>
>> 1. airtime that isn't used can't be saved.
>>
>> 2. lower best-case latency
>>
>> 3. simpler code
>>
>> 4. clean and gradual service degradation under load.
>>
>> the arguments against are:
>>
>> 5. throughput per ms of transmit time is better if aggregation happens
>> than if it doesn't.
>>
>> 6. if you don't transmit, some other station may choose to before you
>> would have finished.
>>
>> #2 is obvious, but with the caveat that any time you transmit you may be
>> delaying someone else.
>>
>> #1 and #6 are flip sides of each other. we want _someone_ to use the
>> airtime, the question is who.
>>
>> #3 and #4 are closely related.
>>
>> If you follow my approach (transmit immediately if you can, aggregate when
>> you have a queue), the code really has one mode (plus queuing): "if you have
>> a transmit opportunity, transmit up to X packets from the queue", and it
>> doesn't matter if it's only one packet.
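
A minimal sketch of that single mode, just to make the shape concrete (the
queue, frame struct and hw_submit() below are stand-ins, not ath9k code):

    #include <stddef.h>

    #define MAX_AGG 32

    struct frame { const void *data; size_t len; };
    struct txq   { struct frame ring[256]; unsigned head, tail; };

    static int txq_pop(struct txq *q, struct frame *f)
    {
        if (q->head == q->tail)
            return 0;                        /* queue empty */
        *f = q->ring[q->head++ % 256];
        return 1;
    }

    static void hw_submit(struct frame *batch, int n)
    {
        (void)batch; (void)n;                /* stand-in: build and queue the A-MPDU */
    }

    static void on_tx_opportunity(struct txq *q)
    {
        struct frame batch[MAX_AGG];
        int n = 0;

        while (n < MAX_AGG && txq_pop(q, &batch[n]))
            n++;                             /* aggregate whatever is queued right now */

        if (n > 0)
            hw_submit(batch, n);             /* n == 1 is fine: no timers, no extra modes */
    }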
>>
>> If you delay the first packet to give yourself a chance to aggregate it with
>> others, you add the complexity and overhead of timers (including cancelling
>> timers, slippage in timers, etc.) and you add a "first packet, start timer"
>> mode to deal with.
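
For contrast, a sketch of the extra state that speculative-delay version drags
in (timer_arm()/timer_cancel() are stand-ins, and the 200 us hold-off is an
arbitrary number):

    enum txq_mode { TXQ_IDLE, TXQ_HOLDING };

    struct txq_state { enum txq_mode mode; int queued; };

    static void timer_arm(unsigned usec) { (void)usec; /* stand-in: schedule callback */ }
    static void timer_cancel(void)       { /* stand-in: cancel it, mind the races */ }

    static void flush_aggregate(struct txq_state *s)
    {
        /* build the A-MPDU from the queued frames (omitted) */
        s->queued = 0;
        s->mode = TXQ_IDLE;
    }

    static void on_enqueue(struct txq_state *s)
    {
        s->queued++;
        if (s->mode == TXQ_IDLE) {
            s->mode = TXQ_HOLDING;       /* the extra "first packet, start timer" mode */
            timer_arm(200);              /* hold off hoping more frames arrive */
        } else if (s->queued >= 32) {
            timer_cancel();              /* queue filled early: cancel and send now */
            flush_aggregate(s);
        }
        /* plus the timer-expiry path, and slippage if the timer fires late */
    }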
>>
>> I grant you that the first approach will "saturate" the airtime at lower
>> traffic levels, but at that point all the stations will start aggregating
>> the minimum amount needed to keep the air saturated, while still minimizing
>> latency.
>>
>> I expect that application-related optimizations would then further
>> complicate the second approach. There are just too many cases where small
>> amounts of data have to be sent and other things serialize behind them.
>>
>> A DNS lookup to find the domain, then a 3-way handshake, then a request to
>> see whether the <web something> library has been updated since it was last
>> cached (repeat for several libraries), and only then fetching the actual
>> page content. Everything up to the actual page content could be single
>> packets that have to be sent (and responded to with a single packet), each
>> waiting for the prior one to complete. If you add a few ms to each of these,
>> you can easily hit 100 ms of added latency. Once you start trying to
>> special-case these sorts of things, the code complexity multiplies.
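
Back-of-the-envelope on that serialization point (the exchange count and the
3 ms hold-off below are illustrative assumptions, not measurements):

    #include <stdio.h>

    int main(void)
    {
        int exchanges = 16;              /* DNS + handshake + a dozen cache checks */
        double holdoff_ms = 3.0;         /* hypothetical per-packet speculative delay */

        /* each exchange is one packet each way, and both directions get held off */
        printf("added latency ~= %.0f ms\n", exchanges * 2 * holdoff_ms);
        return 0;
    }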
>>
>> So I believe that the KISS approach ends up with a 'worse is better'
>> situation.
>>
>> David Lang
>> _______________________________________________
>> Make-wifi-fast mailing list
>> Make-wifi-fast@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/make-wifi-fast
>>
>>
>


-- 
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org




Thread overview: 25+ messages
2016-05-09 11:00 Toke Høiland-Jørgensen
2016-05-09 15:35 ` Dave Taht
2016-05-10  2:25   ` Jonathan Morton
2016-05-10  2:59     ` Dave Taht
2016-05-10  3:30       ` [Make-wifi-fast] [ath9k-devel] " Adrian Chadd
2016-05-10  4:04         ` Dave Taht
2016-05-10  4:22           ` Aaron Wood
2016-05-10  7:15           ` Adrian Chadd
2016-05-10  7:17             ` Adrian Chadd
2016-05-10  3:41       ` [Make-wifi-fast] " David Lang
2016-05-10  4:59         ` Dave Taht
2016-05-10  5:22           ` David Lang
2016-05-10  9:04             ` Toke Høiland-Jørgensen
2016-05-11 14:12               ` Dave Täht
2016-05-11 15:09                 ` Dave Taht
2016-05-11 15:20                   ` Toke Høiland-Jørgensen
2016-05-13 17:46         ` Bob McMahon
2016-05-13 17:49           ` Dave Taht [this message]
2016-05-13 18:05             ` Bob McMahon
2016-05-13 18:11               ` Bob McMahon
2016-05-13 18:57               ` Dave Taht
2016-05-13 19:20                 ` Aaron Wood
2016-05-13 20:21                   ` Dave Taht
2016-05-13 20:51                     ` Dave Taht
2016-05-13 20:49           ` David Lang
