Message-ID: <YgK14ElTBW1BOXxW@slm.duckdns.org>
Date:   Tue, 8 Feb 2022 08:26:40 -1000
From:   Tejun Heo <tj@...nel.org>
To:     Imran Khan <imran.f.khan@...cle.com>
Cc:     gregkh@...uxfoundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 2/2] kernfs: Replace per-fs global rwsem with per-fs
 hashed rwsem.

Hello,

On Sun, Feb 06, 2022 at 12:09:25PM +1100, Imran Khan wrote:
> Having a single rwsem to synchronize all operations across a kernfs
> based file system (cgroup, sysfs etc.) does not scale well. Replace
> it with a hashed rwsem to reduce contention around single per-fs
> rwsem.
> Also introduce a per-fs rwsem to protect the per-fs list of kernfs_super_info.

Can you split the two conversions into separate patches? Also, as before, I'm
not sure about the per-fs hashtable.

>  static bool kernfs_active(struct kernfs_node *kn)
>  {
> -	lockdep_assert_held(&kernfs_root(kn)->kernfs_rwsem);
> +	int idx = hash_ptr(kn, NR_KERNFS_LOCK_BITS);
> +
> +	lockdep_assert_held(&kernfs_root(kn)->kernfs_rwsem[idx]);

Please encapsulate this into a function. If possible, it'd be ideal if the
conversion could be done in steps - e.g. first introduce the lock
encapsulation interface while leaving the locking itself alone, and then
switch the actual locking.
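
Something like the below, maybe (untested, and the helper names are just
strawmen), so that the hashing detail lives in one place:

/* untested sketch; name and placement are just for illustration */
static struct rw_semaphore *kernfs_rwsem_ptr(struct kernfs_node *kn)
{
	int idx = hash_ptr(kn, NR_KERNFS_LOCK_BITS);

	return &kernfs_root(kn)->kernfs_rwsem[idx];
}

static void kernfs_rwsem_assert_held(struct kernfs_node *kn)
{
	lockdep_assert_held(kernfs_rwsem_ptr(kn));
}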

> -static void kernfs_drain(struct kernfs_node *kn)
> -	__releases(&kernfs_root(kn)->kernfs_rwsem)
> -	__acquires(&kernfs_root(kn)->kernfs_rwsem)
> +static void kernfs_drain(struct kernfs_node *kn, struct kernfs_node *anc)
> +	__releases(&kernfs_root(anc)->kernfs_rwsem[a_idx])
> +	__acquires(&kernfs_root(anc)->kernfs_rwsem[a_idx])
>  {
>  	struct kernfs_root *root = kernfs_root(kn);
> +	int a_idx = hash_ptr(anc, NR_KERNFS_LOCK_BITS);
>  
> -	lockdep_assert_held_write(&root->kernfs_rwsem);
> -	WARN_ON_ONCE(kernfs_active(kn));
> +	lockdep_assert_held_write(&root->kernfs_rwsem[a_idx]);
> +	WARN_ON_ONCE(atomic_read(&kn->active) >= 0);

Ditto, I'd much prefer to see the lock lookup and the accompanying operations
encapsulated somehow.

> @@ -1588,37 +1597,65 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
>  		     const char *new_name, const void *new_ns)
>  {
>  	struct kernfs_node *old_parent;
> -	struct kernfs_root *root;
>  	const char *old_name = NULL;
> -	int error;
> +	int error, idx, np_idx, p_idx;
>  
>  	/* can't move or rename root */
>  	if (!kn->parent)
>  		return -EINVAL;
>  
> -	root = kernfs_root(kn);
> -	down_write(&root->kernfs_rwsem);
> +	/*
> +	 * Take lock of node's old (current) parent.
> +	 * If new parent has a different lock, then take that
> +	 * lock as well.
> +	 */
> +	idx = hash_ptr(kn, NR_KERNFS_LOCK_BITS);
> +	p_idx = hash_ptr(kn->parent, NR_KERNFS_LOCK_BITS);
> +	np_idx = hash_ptr(new_parent, NR_KERNFS_LOCK_BITS);
> +
> +	/*
> +	 * Take only kn's lock. The subsequent kernfs_put
> +	 * may free up old_parent so if old_parent has a
> +	 * different lock, we will explicitly release that.
> +	 */
> +	down_write_kernfs_rwsem(kn, LOCK_SELF, 0);
> +
> +	if (idx != np_idx) /* new parent hashes to different lock */
> +		down_write_kernfs_rwsem(new_parent, LOCK_SELF, 1);
> +
> +	/* old_parent hashes to a different lock */
> +	if (idx != p_idx && p_idx != np_idx)
> +		down_write_kernfs_rwsem(kn->parent, LOCK_SELF, 2);

Can't this lead to an ABBA deadlock? When double locking, the locking order
should always be consistent. If we were doing per-kernfs_node locks, child ->
parent ordering would work, but we're hashing the locks, so that doesn't work
anymore - one child-parent combo can lock A then B while another child-parent
combo hashes the other way around and locks B then A. The only order we can
define is in terms of the locks themselves - e.g. if the address (or index)
of lock A < lock B, lock A first regardless of whether it maps to the child
or the parent.

Also, please encapsulate double locking in a set of functions. We really
don't wanna see all the details in the users.
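
A completely untested sketch of what I mean, reusing the lookup helper
suggested above (kernfs_rwsem_ptr() is just a strawman name):

/*
 * Always acquire the two hashed rwsems in address order so that any
 * pair of nodes nests consistently, regardless of which one is the
 * child and which one is the parent.
 */
static void kernfs_down_write_double(struct kernfs_node *kn1,
				     struct kernfs_node *kn2)
{
	struct rw_semaphore *rwsem1 = kernfs_rwsem_ptr(kn1);
	struct rw_semaphore *rwsem2 = kernfs_rwsem_ptr(kn2);

	if (rwsem1 == rwsem2) {
		/* both nodes hash to the same lock, take it once */
		down_write(rwsem1);
		return;
	}

	if (rwsem1 > rwsem2)
		swap(rwsem1, rwsem2);

	down_write(rwsem1);
	down_write_nested(rwsem2, SINGLE_DEPTH_NESTING);
}

plus a matching unlock helper, so that callers never see the hashing or the
ordering rule.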

> --- a/fs/kernfs/kernfs-internal.h
> +++ b/fs/kernfs/kernfs-internal.h
> @@ -19,6 +19,9 @@
>  #include <linux/kernfs.h>
>  #include <linux/fs_context.h>
>  
> +#define LOCK_SELF 0
> +#define LOCK_SELF_AND_PARENT 1

I get that this is a private header but can you please add some identifying
prefix and make them enums? That makes it way easier for debuggers, bpf and
tracing.
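
e.g. something along the lines of (names are just an example):

enum kernfs_rwsem_lock_pattern {
	KERNFS_RWSEM_LOCK_SELF,
	KERNFS_RWSEM_LOCK_SELF_AND_PARENT,
};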

> +/*
> + * If both node and it's parent need locking,
> + * lock child first so that kernfs_rename_ns
> + * does not change the parent, leaving us
> + * with old parent here.
> + */

Please reflow it closer to 80 chars, and ditto with the comment above - we
can't follow child -> parent order with hashed locks.

Thanks.

-- 
tejun
