Message-ID: <20260130024410.2491051-1-realwujing@gmail.com>
Date: Thu, 29 Jan 2026 21:44:06 -0500
From: Qiliang Yuan <realwujing@...il.com>
To: kuniyu@...gle.com
Cc: brauner@...nel.org,
	davem@...emloft.net,
	edumazet@...gle.com,
	horms@...nel.org,
	jlayton@...nel.org,
	kuba@...nel.org,
	linux-kernel@...r.kernel.org,
	netdev@...r.kernel.org,
	pabeni@...hat.com,
	realwujing@...il.com,
	sd@...asysnail.net,
	yuanql9@...natelecom.cn
Subject: Re: [PATCH net-next v3] netns: optimize netns cleaning by batching unhash_nsid calls

Hi Kuniyuki,

Thank you for your valuable feedback!

On Wed, 28 Jan 2026 09:13:59 -0800 Kuniyuki Iwashima <kuniyu@...gle.com> wrote:
> idr_for_each() sounds better to me.

I have adopted this in v4. It makes the IDR traversal more idiomatic and
avoids re-walking the tree for every id.

> If we replace list_del_rcu(&net->list); with
> list_del_init_rcu(&net->list);, we can check net->list.pprev
> instead of adding dying_net, which is a bit racy since
> idr_for_each() could return a net which would have been
> processed in the next cleanup_net() invocation.

To resolve this race, I've moved the setting of 'is_dying = true' inside 
cleanup_net() while still holding the net_rwsem write lock. This ensures 
all namespaces in the current kill_list are marked before we release the 
lock and perform the batch unhashing.

Regarding list_del_init_rcu(): since it is not a standard API, I evaluated
using list_del_rcu() followed by INIT_LIST_HEAD(). However, re-initializing
the pointers of a just-deleted entry is unsafe for concurrent RCU readers
(e.g. in for_each_net_rcu()): a reader still holding that entry would follow
its now self-referential next pointer and spin forever. Using the 'is_dying'
boolean under the existing lock seems to be the safest and simplest approach.

I've also cleaned up the redundant nsid_lock and the duplicate Signed-off-by 
tags as you suggested.

With batching, one cleanup_net() invocation now costs O(N_system * N_ids)
regardless of how many namespaces are in the batch, eliminating the
O(M_batch) multiplier of the per-net approach.

I've just sent out the v4 patch. Looking forward to your thoughts.

Thanks,
Qiliang
