[Make-wifi-fast] emulating wifi better - coupling qdiscs in netem?

Eric Dumazet eric.dumazet at gmail.com
Mon Jun 18 18:27:14 EDT 2018



On 06/18/2018 02:54 PM, Pete Heist wrote:
> 
>> On Jun 18, 2018, at 9:44 PM, Dave Taht <dave.taht at gmail.com> wrote:
>>
>> This is still without batch releases, yes?
> 
> Yes, I should've tried that earlier, but I’m scratching my head now as to how it works. Perhaps it’s because the old example I’m using for the non-GSO case relies on deprecated functions and I ought to just ditch it, but I thought that if, in my callback, I switched:
> 
> return nfq_set_verdict(qh, id, NF_ACCEPT, 0, NULL);
> 
> to
> 
> return nfq_set_verdict_batch(qh, id + 8, NF_ACCEPT);
> 
> then my callback might not be called for the subsequent 8 packets I’ve accepted; however, it continues to be called for each id sequentially anyway, and throughput is no better. If I change 8 to something unreasonable, like 1000000, throughput is cut in half, so it’s doing “something”.
> 
> There are functions in the newer GSO example like nfq_nlmsg_verdict_put, but I don’t see a batch version of that. So, I’m likely missing something…
> 
> BTW, I don’t see a change when setting SO_BUSY_POLL on nfq’s fd (I tried 1000 - 1000000 usec).
>
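For reference: nfq_set_verdict_batch() applies the verdict to every queued packet whose id is less than or equal to the id you pass in, but the callback itself still fires once per packet; batching amortizes the verdict messages, not the per-packet upcall, which would match what you are seeing. A minimal sketch of the usual idiom, assuming a batch size of 8 and the stock libnetfilter_queue callback signature (the helper name cb is illustrative):

#include <stdint.h>
#include <arpa/inet.h>
#include <linux/netfilter.h>            /* NF_ACCEPT */
#include <libnetfilter_queue/libnetfilter_queue.h>

/* Illustrative only: verdict every 8th packet, covering the ones held back. */
static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
              struct nfq_data *nfa, void *data)
{
    static uint32_t count;
    struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
    uint32_t id = ntohl(ph->packet_id);

    if (++count % 8)
        return 0;   /* hold the verdict; the packet stays queued */

    /* accept this id and every smaller queued id in one message */
    return nfq_set_verdict_batch(qh, id, NF_ACCEPT);
}

The queue length (nfq_set_queue_maxlen()) has to be comfortably larger than the batch size, since held packets stay queued until the batch verdict arrives.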

Busy polling does not require SO_BUSY_POLL.

Usually, we simply use non-blocking system calls in a big loop
(nl_socket_get_fd(), then put the file descriptor into O_NDELAY mode).
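A minimal sketch of such a loop, assuming a libnl socket already bound to the queue (error handling and message parsing elided):

#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <netlink/socket.h>

static void busy_loop(struct nl_sock *sk)
{
    char buf[65536];
    int fd = nl_socket_get_fd(sk);

    /* O_NDELAY is the historical name for O_NONBLOCK */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            continue;   /* nothing yet, spin again */
        if (n <= 0)
            break;
        /* parse and verdict the nfqueue messages in buf here */
    }
}

This pins a core at 100%, of course; that is the price of polling.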

SO_BUSY_POLL is a way to directly call the NAPI handler of the device (or, more precisely, of the RX queue)
feeding the packets. This saves the hard-interrupt latency.
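On sockets that support it, it is just a setsockopt; the value is the busy-poll budget in microseconds. A sketch (raising the value needs CAP_NET_ADMIN):

#include <sys/socket.h>

static int enable_busy_poll(int fd, int usec)
{
    /* usec: how long to busy-poll the RX queue before giving up */
    return setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &usec, sizeof(usec));
}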

For NFQUEUE, that would require a bit of plumbing, I guess.

Each NF queue would have to record (in the kernel) the NAPI id of the intercepted packets
-> a bit complicated, since the number of RX queues on a NIC/host is quite variable.
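For comparison, ordinary sockets already expose that association to userspace: since Linux 4.12, getsockopt(SO_INCOMING_NAPI_ID) reports the NAPI id of the traffic last received on a socket. NFQUEUE would need equivalent kernel plumbing, which does not exist today. A sketch of the existing option:

#include <sys/socket.h>

static unsigned int incoming_napi_id(int fd)
{
    unsigned int napi_id = 0;
    socklen_t len = sizeof(napi_id);

    /* returns 0 if nothing has been received on this socket yet */
    getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len);
    return napi_id;
}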


