lists.openwall.net - Open Source and information security mailing list archives
Message-ID: <3384c99b-de8d-15d5-b470-b1b56e4b4770@huawei.com>
Date: Tue, 4 Apr 2023 11:18:24 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: <davem@...emloft.net>, <netdev@...r.kernel.org>, <edumazet@...gle.com>,
	<pabeni@...hat.com>, <hawk@...nel.org>, <ilias.apalodimas@...aro.org>
Subject: Re: [RFC net-next 1/2] page_pool: allow caching from safely localized NAPI

On 2023/4/4 9:45, Jakub Kicinski wrote:
> On Tue, 4 Apr 2023 08:53:36 +0800 Yunsheng Lin wrote:
>> I wonder if we can make this more generic by adding the skb to a per-napi
>> list instead of sd->defer_list, so that we can always use NAPI kicking to
>> flush skbs, as net_tx_action() does for sd->completion_queue, instead of
>> softirq kicking?
>>
>> Also, since we know which napi a specific socket is bound to through the
>> busy-poll mechanism, could we reuse that to release an skb to the napi
>> bound to that socket?
>
> Seems doable. My thinking was to first see how well the simpler scheme
> performs with production workloads, because it should have no downsides.

Looking forward to some performance data with production workloads :)

> Tracking real NAPI pointers per socket and extra RCU sync to manage
> per-NAPI defer queues may have a perf cost.

I suppose the extra RCU sync only happens when a napi is added or removed,
not in the data path?