Message-ID: <daad6ba2-6916-3923-c116-d0470920fe1a@nbd.name>
Date:   Sun, 26 Jul 2020 19:19:03 +0200
From:   Felix Fietkau <nbd@....name>
To:     Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org
Cc:     Hillf Danton <hdanton@...a.com>
Subject: Re: [RFC] net: add support for threaded NAPI polling

On 2020-07-26 18:49, Eric Dumazet wrote:
> On 7/26/20 9:31 AM, Felix Fietkau wrote:
>> For some drivers (especially 802.11 drivers), doing a lot of work in the NAPI
>> poll function does not perform well. Since NAPI poll is bound to the CPU it
>> was scheduled from, we can easily end up with a few very busy CPUs spending
>> most of their time in softirq/ksoftirqd while other CPUs sit idle.
>> 
>> Introduce threaded NAPI for such drivers based on a workqueue. The API is the
>> same except for using netif_threaded_napi_add instead of netif_napi_add.
>> 
>> In my tests with mt76 on MT7621, using threaded NAPI plus a thread for tx
>> scheduling improves LAN->WLAN bridging throughput by 10-50%. Without threaded
>> NAPI, throughput is wildly inconsistent, depending on which CPU runs the tx
>> scheduling thread.
>> 
>> With threaded NAPI, throughput seems stable and consistent (and higher than
>> the best results I got without it).
> 
> Note that even with a threaded NAPI, you will not be able to use more than one CPU
> to process the traffic.
For a single threaded NAPI user, that's correct. The main difference here
is that the CPU running the poll function does not have to be the same
as the CPU that scheduled it, and it can change based on the load.
That makes a huge difference in my tests.
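
To make that concrete, here is a minimal sketch of a driver conversion,
assuming netif_threaded_napi_add() takes the same arguments as
netif_napi_add() (as described above); the mydrv_* names are hypothetical:

static int mydrv_poll(struct napi_struct *napi, int budget)
{
	int done = 0;

	/* ... process up to 'budget' received frames, counting them ... */

	if (done < budget)
		napi_complete_done(napi, done);
	return done;
}

static void mydrv_init_napi(struct mydrv_priv *priv)
{
	/* Before: the poll function runs in softirq on the scheduling CPU. */
	/* netif_napi_add(priv->ndev, &priv->napi, mydrv_poll, 64); */

	/* After: the poll function runs from a workqueue worker instead. */
	netif_threaded_napi_add(priv->ndev, &priv->napi, mydrv_poll, 64);
}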

> Also, I wonder how this will scale to more than one device using it?
The workqueue creates multiple workers that pick up poll work, so it
should scale nicely.
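
Roughly, the shape is one unbound workqueue shared by all threaded NAPI
instances. A sketch of that idea (the member and function names here are
assumptions for illustration, not the actual patch):

static struct workqueue_struct *napi_wq;

static void napi_poll_workfn(struct work_struct *work)
{
	/* Assumes a work_struct member embedded in struct napi_struct. */
	struct napi_struct *napi = container_of(work, struct napi_struct, work);

	/* Run one poll round; requeue if the budget was fully consumed. */
	if (napi->poll(napi, napi->weight) >= napi->weight)
		queue_work(napi_wq, work);
}

static int __init napi_wq_init(void)
{
	/* WQ_UNBOUND: workers are not pinned to the scheduling CPU, so
	 * pending poll work can run on whichever CPUs are idle. */
	napi_wq = alloc_workqueue("napi_workq", WQ_UNBOUND | WQ_HIGHPRI, 0);
	return napi_wq ? 0 : -ENOMEM;
}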

> Say we need 4 NAPIs: how will the different work queues mix together?
> 
> Years ago we invented RPS and RFS to spread incoming traffic across more
> CPUs for devices that have a single hardware queue.
Unfortunately that does not work well at all for my use case (802.11
drivers). A really large chunk of the work (e.g. 802.11 -> 802.3 header
conversion, state checks, etc.) is done inside the poll function, before
the frame even goes anywhere near the network stack and RPS/RFS.
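
To show where that work sits relative to RPS/RFS, a simplified 802.11 rx
poll loop (wifi_dev and wifi_dequeue_rx() are hypothetical placeholders;
ieee80211_data_to_8023() is the real cfg80211 helper):

static int wifi_napi_poll(struct napi_struct *napi, int budget)
{
	struct wifi_dev *wdev = container_of(napi, struct wifi_dev, napi);
	struct sk_buff *skb;
	int done = 0;

	while (done < budget && (skb = wifi_dequeue_rx(wdev)) != NULL) {
		/* CPU-heavy per-frame work, all on the polling CPU: */
		if (ieee80211_data_to_8023(skb, wdev->ndev->dev_addr,
					   NL80211_IFTYPE_STATION)) {
			dev_kfree_skb(skb);
			continue;
		}
		/* Only here does the skb reach the stack, where RPS/RFS
		 * can finally steer it to another CPU. */
		napi_gro_receive(napi, skb);
		done++;
	}

	if (done < budget)
		napi_complete_done(napi, done);
	return done;
}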

I did a lot of experiments trying to parallelize the work by tuning RFS,
IRQ affinity, etc. on MT7621. I didn't get anything close to the
consistent performance I get by adding threaded NAPI to mt76 along with
moving some other CPU-intensive work from tasklets to threads.

- Felix
