Open Source and information security mailing list archives
Date:   Tue, 11 May 2021 18:46:16 +0200
From:   Yannick Vignon <yannick.vignon@....nxp.com>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Antoine Tenart <atenart@...nel.org>,
        Wei Wang <weiwan@...gle.com>, Taehee Yoo <ap420073@...il.com>,
        Alexander Lobakin <alobakin@...me>, netdev@...r.kernel.org,
        Giuseppe Cavallaro <peppe.cavallaro@...com>,
        Alexandre Torgue <alexandre.torgue@...s.st.com>,
        Jose Abreu <joabreu@...opsys.com>,
        Maxime Coquelin <mcoquelin.stm32@...il.com>,
        Joakim Zhang <qiangqing.zhang@....com>,
        sebastien.laveze@....nxp.com,
        Yannick Vignon <yannick.vignon@....com>
Subject: Re: [RFC PATCH net-next v1 0/2] Threaded NAPI configurability

On 5/7/2021 12:18 AM, Jakub Kicinski wrote:
> On Thu,  6 May 2021 19:20:19 +0200 Yannick Vignon wrote:
>> The purpose of these 2 patches is to be able to configure the scheduling
>> properties (e.g. affinity, priority...) of the NAPI threads more easily
>> at run-time, based on the hardware queues each thread is handling.
>> The main goal is really to expose which thread does what, as the current
>> naming doesn't exactly make that clear.
>>
>> Posting this as an RFC in case people have different opinions on how to
>> do that.
> 
> WQ <-> CQ <-> irq <-> napi mapping needs an exhaustive netlink
> interface. We've been saying this for a while. Neither hard coded
> naming schemes nor one-off sysfs files are a great idea IMHO.
> 

Could you elaborate on the kind of netlink interface you are thinking about?
We already have standard ways of configuring process priorities and 
affinities; what we need instead is to expose which queue(s) each NAPI 
thread/instance is responsible for (and, as I said earlier, I fear this 
will involve driver changes).
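For context, here is a minimal sketch of what already works with the 
standard tools once threaded NAPI is enabled (assuming a device named 
eth0; current kernels name the kthreads "napi/<dev>-<napi-id>"):

```shell
# Enable threaded NAPI on eth0; each NAPI instance then runs in
# its own kthread named "napi/eth0-<napi-id>".
echo 1 > /sys/class/net/eth0/threaded

# Pick one of the NAPI kthreads. Which hardware queue it serves
# is exactly what the name alone does not tell you.
pid=$(ps -e -o pid=,comm= | awk '$2 ~ /^napi\/eth0/ { print $1; exit }')

# The standard scheduling knobs apply to it like to any other task:
taskset -pc 2 "$pid"     # pin to CPU 2
chrt -f -p 50 "$pid"     # SCHED_FIFO, priority 50
```

So the affinity/priority part is covered by existing interfaces; the 
missing piece is only the thread-to-queue mapping.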
Now, one place where a netlink API could be of use is statistics: we 
currently do not have any per-queue counters, and those would be useful 
when working on multi-queue setups.
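To illustrate the gap: some drivers do expose per-queue counters today, 
but only as driver-specific ethtool stats (names like rx_queue_0_packets 
are a driver convention, not a standard), e.g.:

```shell
# On drivers that expose them, per-queue counters appear in the
# ethtool stats under driver-chosen names; there is no generic,
# driver-independent interface for them.
ethtool -S eth0 | grep -E '(rx|tx)_queue_[0-9]+'
```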
