Message-ID: <2ce6de8b-f520-3f09-746a-caf2ecab428a@gmail.com>
Date:   Thu, 6 Aug 2020 12:25:08 -0700
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Jakub Kicinski <kuba@...nel.org>, Felix Fietkau <nbd@....name>
Cc:     netdev@...r.kernel.org, Eric Dumazet <eric.dumazet@...il.com>,
        Hillf Danton <hdanton@...a.com>
Subject: Re: [PATCH v2] net: add support for threaded NAPI polling



On 8/6/20 11:55 AM, Jakub Kicinski wrote:
> On Thu,  6 Aug 2020 11:55:58 +0200 Felix Fietkau wrote:
>> For some drivers (especially 802.11 drivers), doing a lot of work in the NAPI
>> poll function does not perform well. Since NAPI poll is bound to the CPU it
>> was scheduled from, we can easily end up with a few very busy CPUs spending
>> most of their time in softirq/ksoftirqd and some idle ones.
>>
>> Introduce threaded NAPI for such drivers based on a workqueue. The API is the
>> same except for using netif_threaded_napi_add instead of netif_napi_add.
>>
>> In my tests with mt76 on MT7621 using threaded NAPI + a thread for tx scheduling
>> improves LAN->WLAN bridging throughput by 10-50%. Throughput without threaded
>> NAPI is wildly inconsistent, depending on the CPU that runs the tx scheduling
>> thread.
>>
>> With threaded NAPI, throughput seems stable and consistent (and higher than
>> the best results I got without it).
> 
> I'm still trying to wrap my head around this.
> 
> Am I understanding correctly that you have one IRQ and multiple NAPI
> instances?
> 
> Are we not going to end up with pretty terrible cache locality here if
> the scheduler starts to throw rx and tx completions around to random
> CPUs?
> 
> I understand that implementing separate kthreads would be more LoC, but
> we do have ksoftirqs already... maybe we should make the NAPI ->
> ksoftirq mapping more flexible, and improve the logic which decides to
> load ksoftirq rather than make $current() pay?
> 
> Sorry for being slow.
> 
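
For reference, a minimal driver-side sketch of the API change described in the quoted
commit message. Per the patch description the call takes the same arguments as
netif_napi_add(); the driver functions and names below are hypothetical:

#include <linux/netdevice.h>

/* Standard NAPI poll callback: process up to @budget packets. */
static int my_poll(struct napi_struct *napi, int budget)
{
        int work_done = 0;

        /* ... process up to @budget rx packets here ... */

        if (work_done < budget)
                napi_complete_done(napi, work_done);
        return work_done;
}

static void my_setup(struct net_device *dev, struct napi_struct *napi)
{
        /* Classic softirq-driven NAPI would be:
         *   netif_napi_add(dev, napi, my_poll, NAPI_POLL_WEIGHT);
         * Threaded NAPI as proposed by the patch under discussion: */
        netif_threaded_napi_add(dev, napi, my_poll, NAPI_POLL_WEIGHT);
}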


The issue with ksoftirqd is that:
- It is bound to a CPU.
- Its nice value is 0, meaning that user threads can sometimes compete too much with it.
- It handles all kinds of softirqs, so messing with it might hurt some other layer.

Note that the patch is using a dedicated workqueue. That is not going to be practical
if you need to handle two different NICs and want separate pools for each of them.
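
A minimal sketch, not part of the patch, of what per-NIC pools could look like: each
device gets its own unbound workqueue instead of sharing one global pool. The my_nic
wrapper and its fields are made up for illustration:

#include <linux/netdevice.h>
#include <linux/workqueue.h>

struct my_nic {
        struct net_device *dev;
        struct workqueue_struct *napi_wq;   /* hypothetical per-device pool */
};

static int my_nic_init_wq(struct my_nic *nic)
{
        /* WQ_UNBOUND lets the scheduler place the work; WQ_SYSFS exposes the
         * workqueue attributes under /sys/devices/virtual/workqueue/. */
        nic->napi_wq = alloc_workqueue("napi-%s", WQ_UNBOUND | WQ_SYSFS, 0,
                                       netdev_name(nic->dev));
        return nic->napi_wq ? 0 : -ENOMEM;
}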

Ideally, having one kthread per queue would be nice, but then there is more plumbing
work to make these kthreads visible in a convenient way (/sys/class/net/ethX/queues/..../kthread).
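
As a rough sketch of that direction (not working code from any posted patch), a
per-queue thread could look like the following; the queue_thread wrapper, its fields,
and the thread naming scheme are all assumptions made for illustration:

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/netdevice.h>
#include <linux/wait.h>

struct queue_thread {                      /* hypothetical per-queue wrapper */
        struct napi_struct *napi;
        wait_queue_head_t wait;
        bool kick;                         /* set by the driver's IRQ handler */
        struct task_struct *task;
};

static int queue_thread_fn(void *data)
{
        struct queue_thread *qt = data;

        while (!kthread_should_stop()) {
                wait_event_interruptible(qt->wait,
                                         qt->kick || kthread_should_stop());
                qt->kick = false;
                /* Poll the queue here; a real version would honour the NAPI
                 * budget and keep rescheduling itself while work remains. */
        }
        return 0;
}

static int queue_thread_start(struct queue_thread *qt, const char *ifname, int qid)
{
        init_waitqueue_head(&qt->wait);
        qt->task = kthread_run(queue_thread_fn, qt, "napi-%s-%d", ifname, qid);
        return PTR_ERR_OR_ZERO(qt->task);
}

The resulting task_struct would then be the natural object to expose under the
per-queue sysfs directory mentioned above.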
