Let's make wifi fast again!
* Re: [Make-wifi-fast] debugging TCP stalls on high-speed wifi
       [not found]       ` <ff6b35ad589d7cf0710cb9fca4c799538da2e653.camel@sipsolutions.net>
@ 2019-12-12 23:42         ` Dave Taht
  2019-12-13  0:59           ` Simon Barber
  2019-12-13  8:08           ` Johannes Berg
  0 siblings, 2 replies; 6+ messages in thread
From: Dave Taht @ 2019-12-12 23:42 UTC (permalink / raw)
  To: Johannes Berg
  Cc: Eric Dumazet, Neal Cardwell, Toke Høiland-Jørgensen,
	linux-wireless, Netdev, Make-Wifi-fast

On Thu, Dec 12, 2019 at 1:12 PM Johannes Berg <johannes@sipsolutions.net> wrote:
>
> Hi Eric,
>
> Thanks for looking :)
>
> > > I'm not sure how to do headers-only, but I guess -s100 will work.
> > >
> > > https://johannes.sipsolutions.net/files/he-tcp.pcap.xz
> > >
> >
> > Lack of GRO on receiver is probably what is killing performance,
> > both for receiver (generating gazillions of acks) and sender
> > (to process all these acks)
> Yes, I'm aware of this, to some extent. And I'm not saying we should see
> even close to 1800 Mbps like we have with UDP...
>
> Mind you, the biggest thing that kills performance with many ACKs isn't
> the load on the system - the sender system is only moderately loaded at
> ~20-25% of a single core with TSO, and around double that without TSO.
> The thing that kills performance is eating up all the medium time with
> small non-aggregated packets, due to the half-duplex nature of WiFi.
> I know you know, but in case somebody else is reading along :-)

I'm paying attention but pay attention faster if you cc make-wifi-fast.

If you captured the air you'd probably see the sender winning the
election for airtime 2 or more times in a row,
it's random and often dependent on a variety of factors.

Most Wifi is *not* "half" duplex, which implies it ping pongs between
send and receive.

>
> But unless somehow you think processing the (many) ACKs on the sender
> will cause it to stop transmitting, or something like that, I don't
> think I should be seeing what I described earlier: we sometimes (have
> to?) reclaim the entire transmit queue before TCP starts pushing data
> again. That's less than 2MB split across at least two TCP streams, I
> don't see why we should have to get to 0 (which takes about 7ms) until
> more packets come in from TCP?

Perhaps having a budget for ack processing within a 1ms window?

> Or put another way - if I free say 400kB worth of SKBs, what could be
> the reason we don't see more packets be sent out of the TCP stack within
> the few ms or so? I guess I have to correlate this somehow with the ACKs
> so I know how much data is outstanding for ACKs. (*)

yes.

It would be interesting to repeat this test in ht20 mode, and/or using

flent --socket-stats --step-size=.04 --te=upload_streams=2 -t
whatever_variant_of_test tcp_nup

That will capture some of the tcp stats for you.

>
> The sk_pacing_shift is set to 7, btw, which should give us 8ms of
> outstanding data. For now in this setup that's enough(**), and indeed
> bumping the limit up (setting sk_pacing_shift to say 5) doesn't change
> anything. So I think this part we actually solved - I get basically the
> same performance and behaviour with two streams (needed due to GBit LAN
> on the other side) as with 20 streams.
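
For reference on the numbers above: TCP keeps roughly
sk_pacing_rate >> sk_pacing_shift bytes queued below the socket, i.e.
1/2^shift of a second worth of data, so shift 7 is ~8 ms and shift 5
would be ~31 ms (drivers and mac80211 can request a value via the
kernel's sk_pacing_shift_update() helper). A minimal stand-alone sketch
of just that arithmetic, assuming ~1 Gbit/s of pacing rate per stream
purely for illustration:

#include <stdio.h>

int main(void)
{
    /* TCP's small-queue limit is roughly pacing_rate >> sk_pacing_shift
     * bytes, i.e. 1/2^shift of a second worth of data. */
    double pacing_rate = 1e9 / 8;   /* ~1 Gbit/s, in bytes per second */

    for (int shift = 5; shift <= 10; shift++)
        printf("shift %2d -> %5.1f ms window, %8.0f bytes at 1 Gbit/s\n",
               shift, 1000.0 / (1 << shift), pacing_rate / (1 << shift));
    return 0;
}
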
>
>
> > I had a plan about enabling compressing ACK as I did for SACK
> > in commit
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5d9f4262b7ea41ca9981cc790e37cca6e37c789e
> >
> > But I have not done it yet.
> > It is a pity because this would tremendously help wifi I am sure.
>
> Nice :-)
>
> But that is something the *receiver* would have to do.

Well it is certainly feasible to thin acks on the driver as we did in
cake. More general, more CPU intensive. I'm happily just awaiting
Eric's work instead.
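
To make the idea concrete, here is a toy, user-space model of that sort
of ACK thinning: of the pure ACKs queued for the same flow, only the
newest one needs to go out, since a cumulative ACK covers the earlier
ones. This is a sketch of the concept only (all names invented), not
cake's or any driver's actual code, and a real implementation must also
keep ACKs that carry SACK blocks, window updates, ECE, or data:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pkt {
    uint32_t flow;      /* stand-in for a hash of the 4-tuple */
    uint32_t ack_seq;   /* cumulative ACK number */
    bool pure_ack;      /* TCP segment with no payload and no SACK etc. */
    bool drop;
};

static void thin_acks(struct pkt *q, int n)
{
    for (int i = 0; i < n; i++) {
        if (!q[i].pure_ack)
            continue;
        for (int j = i + 1; j < n; j++)
            if (q[j].pure_ack && q[j].flow == q[i].flow) {
                q[i].drop = true;   /* superseded by a later ACK */
                break;
            }
    }
}

int main(void)
{
    struct pkt q[] = {
        { 1, 1000, true, false },
        { 1, 2000, true, false },
        { 2,  500, true, false },
        { 1, 3000, true, false },
    };

    thin_acks(q, 4);
    for (int i = 0; i < 4; i++)
        printf("flow %u ack %u -> %s\n", q[i].flow, q[i].ack_seq,
               q[i].drop ? "drop" : "keep");
    return 0;
}
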

One thing Comcast inadvertently does to most flows is remark them CS1,
which tosses big data into the BK queue and ACKs into the BE queue. It
actually helps sometimes.
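
For anyone following along, the reason CS1 lands in the BK queue: 802.11
takes the top three bits of the DSCP as the 802.1d user priority, and UP
1 and 2 map to the background access category. A simplified sketch of
that classification (roughly what the kernel's cfg80211_classify8021d()
does, but not the exact table):

#include <stdio.h>

enum wifi_ac { AC_BK, AC_BE, AC_VI, AC_VO };

/* standard 802.1d user-priority -> 802.11 access-category mapping */
static enum wifi_ac up_to_ac(unsigned up)
{
    switch (up) {
    case 1: case 2: return AC_BK;   /* background */
    case 0: case 3: return AC_BE;   /* best effort */
    case 4: case 5: return AC_VI;   /* video */
    default:        return AC_VO;   /* voice (6, 7) */
    }
}

int main(void)
{
    /* the UP is the top three bits of the 6-bit DSCP, so CS1 (DSCP 8)
     * becomes UP 1 -> AC_BK, while default traffic (DSCP 0) stays AC_BE */
    unsigned cs1 = 8, cs0 = 0;

    printf("CS1: UP %u -> AC %d (AC_BK=%d)\n", cs1 >> 3, up_to_ac(cs1 >> 3), AC_BK);
    printf("CS0: UP %u -> AC %d (AC_BE=%d)\n", cs0 >> 3, up_to_ac(cs0 >> 3), AC_BE);
    return 0;
}
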

>
> The dirty secret here is that we're getting close to 1700 Mbps TCP with
> Windows in place of Linux in the setup, with the same receiver on the
> other end (which is actually a single Linux machine with two GBit
> network connections to the AP). So if we had this I'm sure it'd increase
> performance, but it still wouldn't explain why we're so much slower than
> Windows :-)
>
> Now, I'm certainly not saying that TCP behaviour is the only reason for
> the difference, we already found an issue for example where due to a
> small Windows driver bug some packet extension was always used, and the
> AP is also buggy in that it needs the extension but didn't request it
> ... so the two bugs cancelled each other out and things worked well, but
> our Linux driver believed the AP ... :) Certainly there can be more
> things like that still, I just started on the TCP side and ran into the
> queueing behaviour that I cannot explain.
>
>
> In any case, I'll try to dig deeper into the TCP stack to understand the
> reason for this transmit behaviour.
>
> Thanks,
> johannes
>
>
> (*) Hmm. Now I have another idea. Maybe we have some kind of problem
> with the medium access configuration, and we transmit all this data
> without the AP having a chance to send back all the ACKs? Too bad I
> can't put an air sniffer into the setup - it's a conductive setup.

see above
>
>
> (**) As another aside to this, the next generation HW after this will
> have 256 frames in a block-ack, so that means instead of up to 64 (we
> only use 63 for internal reasons) frames aggregated together we'll be
> able to aggregate 256 (or maybe we again only 255?).

My fervent wish is to somehow be able to mark every frame we can as not
needing a retransmit in future standards. I've lost track of what ax can
do here. And for block-ack retries to give up far sooner.

you can safely drop all but the last three acks in a flow, and the
txop itself provides
a suitable clock.

And, ya know, releasing packets out of order doesn't hurt as much as it
used to, with RACK.
> Each one of those
> frames may be an A-MSDU with ~11k content though (only 8k in the setup I
> have here right now), which means we can get a LOT of data into a single
> PPDU ...

Just wearing my usual hat, I would prefer to optimize for service
time, not bandwidth, in the future, using smaller txops with more data
in them rather than the biggest txops possible.

If you constrain your max txop to 2ms in this test, you will see tcp
in slow start ramp up faster,
and the ap scale to way more devices, with way less jitter and
retries. Most flows never get out of slowstart.
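
If anyone wants to try that: the per-AC TXOP limit stations use is
advertised by the AP, so with a hostapd-based AP it can be set in
hostapd.conf, in units of 32 usec (~2 ms is a value around 62-64). A
sketch only, untested values to adapt:

# hostapd.conf sketch: cap best-effort TXOPs at roughly 2 ms (64 * 32 usec)
wmm_enabled=1
wmm_ac_be_txop_limit=64
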

> . we'll probably have to bump the sk_pacing_shift to be able to
> fill that with a single TCP stream, though since we run all our
> performance numbers with many streams, maybe we should just leave it :)

Please. Optimizing for single flow performance is an academic's game.

>
>


-- 
Make Music, Not War

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-435-0729


* Re: [Make-wifi-fast] debugging TCP stalls on high-speed wifi
  2019-12-12 23:42         ` [Make-wifi-fast] debugging TCP stalls on high-speed wifi Dave Taht
@ 2019-12-13  0:59           ` Simon Barber
  2019-12-13  1:46             ` Eric Dumazet
  2019-12-13  8:08           ` Johannes Berg
  1 sibling, 1 reply; 6+ messages in thread
From: Simon Barber @ 2019-12-13  0:59 UTC (permalink / raw)
  To: Dave Taht
  Cc: Johannes Berg, Make-Wifi-fast, linux-wireless, Netdev, Neal Cardwell

I’m currently adding ACK thinning to Linux’s GRO code. Quite a simple addition given the way that code works.

Simon





* Re: [Make-wifi-fast] debugging TCP stalls on high-speed wifi
  2019-12-13  0:59           ` Simon Barber
@ 2019-12-13  1:46             ` Eric Dumazet
  2019-12-13  1:57               ` Simon Barber
  2019-12-13  4:42               ` Dave Taht
  0 siblings, 2 replies; 6+ messages in thread
From: Eric Dumazet @ 2019-12-13  1:46 UTC (permalink / raw)
  To: Simon Barber, Dave Taht
  Cc: Make-Wifi-fast, Johannes Berg, linux-wireless, Neal Cardwell, Netdev



On 12/12/19 4:59 PM, Simon Barber wrote:
> I’m currently adding ACK thinning to Linux’s GRO code. Quite a simple addition given the way that code works.
> 
> Simon
> 
>

Please don't.

1) It will not help since many NICs do not use GRO.

2) This does not help if you receive one ACK per NIC interrupt, which is quite common.

3) This breaks GRO transparency.

4) TCP can implement this in a more effective/controlled way,
   since the peer knows a lot more flow characteristics.

Middle-boxes should not try to make TCP better; they usually break things.
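
For reference, the SACK compression from the commit linked earlier in
the thread is tunable on the receiver via sysctls (names as of ~5.x
kernels; the values below are the usual defaults, shown only as a
sketch). The equivalent compression of plain ACKs had not been written
yet at the time of this thread:

# receiver-side SACK compression knobs (sketch)
sysctl -w net.ipv4.tcp_comp_sack_delay_ns=1000000   # hold SACKs for up to 1 ms
sysctl -w net.ipv4.tcp_comp_sack_nr=44              # max SACKs compressed in a row
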


* Re: [Make-wifi-fast] debugging TCP stalls on high-speed wifi
  2019-12-13  1:46             ` Eric Dumazet
@ 2019-12-13  1:57               ` Simon Barber
  2019-12-13  4:42               ` Dave Taht
  1 sibling, 0 replies; 6+ messages in thread
From: Simon Barber @ 2019-12-13  1:57 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Dave Taht, Make-Wifi-fast, Johannes Berg, linux-wireless,
	Neal Cardwell, Netdev

In my application this is a bridge or router (not a TCP endpoint), and the driver is doing GRO and NAPI polling. I'm also looking at using skb->fraglist to make the GRO code more effective and more transparent, by passing flags, short segments, etc. through for perfect reconstruction by TSO.

Simon




* Re: [Make-wifi-fast] debugging TCP stalls on high-speed wifi
  2019-12-13  1:46             ` Eric Dumazet
  2019-12-13  1:57               ` Simon Barber
@ 2019-12-13  4:42               ` Dave Taht
  1 sibling, 0 replies; 6+ messages in thread
From: Dave Taht @ 2019-12-13  4:42 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Simon Barber, Make-Wifi-fast, Johannes Berg, linux-wireless,
	Neal Cardwell, Netdev

On Thu, Dec 12, 2019 at 5:46 PM Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
>
>
> On 12/12/19 4:59 PM, Simon Barber wrote:
> > I’m currently adding ACK thinning to Linux’s GRO code. Quite a simple addition given the way that code works.
> >
> > Simon
> >
> >
>
> Please don't.
>
> 1) It will not help since many NICs do not use GRO.
>
> 2) This does not help if you receive one ACK per NIC interrupt, which is quite common.

Packets accumulate in the wifi device and driver, if that's the bottleneck.

>
> 3) This breaks GRO transparency.
>
> 4) TCP can implement this in a more effective/controlled way,
>    since the peer knows a lot more flow characteristics.
>
> Middle-boxes should not try to make TCP better; they usually break things.

I generally have more hope for open source attempts at this than other
means. And there isn't much left
in TCP that will change in the future; it is an ossified protocol.

802.11n, at least, has a problem fitting many packets into an
aggregate. Sending fewer packets is a win
in multiple ways:

A) Improves bi-directional throughput
B) Reduces the size of the receiver's txop (and retries) - the client
is also often running at a lower rate than
the ap.
C) Delivers the most current ack, sooner

When further transiting an aqm that uses random numbers, it hits the
right packet sooner, also.

I welcome experimentation in this area.



-- 
Make Music, Not War

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-435-0729


* Re: [Make-wifi-fast] debugging TCP stalls on high-speed wifi
  2019-12-12 23:42         ` [Make-wifi-fast] debugging TCP stalls on high-speed wifi Dave Taht
  2019-12-13  0:59           ` Simon Barber
@ 2019-12-13  8:08           ` Johannes Berg
  1 sibling, 0 replies; 6+ messages in thread
From: Johannes Berg @ 2019-12-13  8:08 UTC (permalink / raw)
  To: Dave Taht
  Cc: Eric Dumazet, Neal Cardwell, Toke Høiland-Jørgensen,
	linux-wireless, Netdev, Make-Wifi-fast

On Thu, 2019-12-12 at 15:42 -0800, Dave Taht wrote:

> If you captured the air you'd probably see the sender winning the
> election for airtime 2 or more times in a row,
> it's random and often dependent on a variety of factors.

I'm going to try to capture more details - I can probably extract this
out of the firmware but it's more effort.

> Most Wifi is *not* "half" duplex, which implies it ping pongs between
> send and receive.

That's an interesting definition of "half duplex" which doesn't really
match anything that I've seen used or in the literature? What you're
describing sounds more like some sort of "half duplex with token-based
flow control" or something like that to me ...

> > But unless somehow you think processing the (many) ACKs on the sender
> > will cause it to stop transmitting, or something like that, I don't
> > think I should be seeing what I described earlier: we sometimes (have
> > to?) reclaim the entire transmit queue before TCP starts pushing data
> > again. That's less than 2MB split across at least two TCP streams, I
> > don't see why we should have to get to 0 (which takes about 7ms) until
> > more packets come in from TCP?
> 
> Perhaps having a budget for ack processing within a 1ms window?

What do you mean? There's such a budget? What kind of budget? I have
plenty of CPU time left, as far as I can tell.

> It would be interesting to repeat this test in ht20 mode,

Why? HT20 is far slower, so what would be the advantage? In my experience I
don't hit this until I get to HE80.

> flent --socket-stats --step-size=.04 --te=upload_streams=2 -t
> whatever_variant_of_test tcp_nup
> 
> That will capture some of the tcp stats for you.

I guess I can try, but the upload_streams=2 won't actually help - I need
to run towards two different IP addresses - remember that I'm otherwise
limited by a GBit LAN link on the other side right now.

> > But that is something the *receiver* would have to do.
> 
> Well it is certainly feasible to thin acks on the driver as we did in
> cake.

I really don't think it would help in my case, either the ACKs are the
problem (which I doubt) and then they're the problem on the air, or
they're not the problem since I have plenty of CPU time to waste on them
...

> One thing Comcast inadvertently does to most flows is remark them CS1,
> which tosses big data into the BK queue and ACKs into the BE queue. It
> actually helps sometimes.

I thought about doing this but if I make my flows BK it halves my
throughput (perhaps due to the more than double AIFSN?)
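
(For what it's worth, the default EDCA parameters give AC_BE an AIFSN of
3 and AC_BK an AIFSN of 7, so with a 9 usec slot and 16 usec SIFS in
5 GHz that is 16 + 3*9 = 43 usec of fixed deferral per channel access for
BE versus 16 + 7*9 = 79 usec for BK, and BK also tends to lose contention
against any BE traffic, so a large penalty is plausible.)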

> > (**) As another aside to this, the next generation HW after this will
> > have 256 frames in a block-ack, so that means instead of up to 64 (we
> > only use 63 for internal reasons) frames aggregated together we'll be
> > able to aggregate 256 (or maybe we again only 255?).
> 
> My fervent wish is to somehow be able to mark every frame we can as not
> needing a retransmit in future standards.

This can be done since ... oh I don't know, probably 2005 with the
802.11e amendment? Not sure off the top of my head how it interacts with
A-MPDUs though, and probably has bugs if you do that.

> I've lost track of what ax can do here. And for block-ack retries
> to give up far sooner.

You can do that too, it's just a local configuration for how many times you try
each packet. If you give up you leave a hole in the reorder window, but
if you start sending packets that are further ahead than the window, the
old ones will (have to be) released regardless.
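
To make the reorder-window behaviour concrete, a toy model (not
mac80211's actual reorder buffer): frames are released in sequence
order, a missing sequence number leaves a hole, and a frame that lands
beyond the window slides the window forward and forces the hole to be
given up on:

#include <stdbool.h>
#include <stdio.h>

#define WIN 8               /* toy block-ack window; real ones are 64 or 256 */

static unsigned head;       /* lowest sequence number not yet released */
static bool got[4096];      /* toy "frame received" bitmap, indexed by seq */

static void release_in_order(void)
{
    while (got[head]) {     /* release everything contiguous from the head */
        printf("release seq %u\n", head);
        got[head++] = false;
    }
}

static void rx_frame(unsigned seq)
{
    /* a frame beyond the window slides the window forward, forcing out
     * (releasing, or giving up on) everything that falls below it */
    while (seq >= head + WIN) {
        if (got[head])
            printf("release seq %u\n", head);
        else
            printf("give up on seq %u (hole)\n", head);
        got[head++] = false;
    }
    got[seq] = true;
    release_in_order();
}

int main(void)
{
    /* seq 2 is never received; it is abandoned only when seq 10 arrives
     * and pushes it out of the 8-frame window */
    unsigned rx[] = { 0, 1, 3, 4, 5, 6, 7, 8, 9, 10 };

    for (unsigned i = 0; i < sizeof(rx) / sizeof(rx[0]); i++)
        rx_frame(rx[i]);
    return 0;
}
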

> you can safely drop all but the last three acks in a flow, and the
> txop itself provides
> a suitable clock.

Now that's more tricky because once you stick the packets into the
hardware queue you likely have to decide whether or not they're
important.

I can probably think of ways of working around that (similar to the
table-based rate scaling we use), but it's tricky.

> And, ya know, releasing packets out of order doesn't hurt as much as it
> used to, with RACK.

:)
That I think is not currently possible with A-MPDUs. It'd also still
have to be opt-in per frame since you can't really do that for anything
but TCP (and probably QUIC? Maybe SCTP?)

> Just wearing my usual hat, I would prefer to optimize for service
> time, not bandwidth, in the future, using smaller txops with more data
> in them rather than the biggest txops possible.

Patience. We're getting there now. HE will allow the AP to schedule
everything, and then you don't need TXOPs anymore. The problem is that
winning a TXOP is costly, so you *need* to put as much as possible into
it for good performance.

With HE and the AP scheduling, you win some, you lose some. The client
will lose the ability to actually make any decisions about its transmit
rate and things like that, but the AP can schedule & poll the clients
better without all the overhead.

> If you constrain your max txop to 2ms in this test, you will see tcp
> in slow start ramp up faster,
> and the ap scale to way more devices, with way less jitter and
> retries. Most flows never get out of slowstart.

I'm running a client ... you're forgetting that there's something else
that's actually talking to the AP you're thinking of :-)

> > . we'll probably have to bump the sk_pacing_shift to be able to
> > fill that with a single TCP stream, though since we run all our
> > performance numbers with many streams, maybe we should just leave it :)
> 
> Please. Optimizing for single flow performance is an academic's game.

Same here, kinda.

johannes


