Date:   Mon, 14 Feb 2022 23:27:34 +1100
From:   Imran Khan <imran.f.khan@...cle.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     gregkh@...uxfoundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 2/2] kernfs: Replace per-fs global rwsem with per-fs
 hashed rwsem.

Hi Tejun,

On 9/2/22 5:26 am, Tejun Heo wrote:
> Hello,
> 
> On Sun, Feb 06, 2022 at 12:09:25PM +1100, Imran Khan wrote:
>> Having a single rwsem to synchronize all operations across a kernfs
>> based file system (cgroup, sysfs etc.) does not scale well. Replace
>> it with a hashed rwsem to reduce contention around single per-fs
>> rwsem.
>> Also introduce a per-fs rwsem to protect the per-fs list of kernfs_super_info.
> 
> Can you split the two conversions into separate patches? Also, I'm not sure
> about the per-fs hashtable as before.
> 

Yes. I have done this in v6 of the patchset at [1].

>>  static bool kernfs_active(struct kernfs_node *kn)
>>  {
>> -	lockdep_assert_held(&kernfs_root(kn)->kernfs_rwsem);
>> +	int idx = hash_ptr(kn, NR_KERNFS_LOCK_BITS);
>> +
>> +	lockdep_assert_held(&kernfs_root(kn)->kernfs_rwsem[idx]);
> 
> Please encapsulate this into a function. If possible, it'd be ideal if the
> conversion can be done in steps - e.g. first introduce lock encapsulation
> interface while leaving locking itself alone and then switch the actual
> locking.

Sure. I have divided the interface and its usage into separate patches
in v6 of the patchset at [1].
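
For illustration, the encapsulation could look roughly like the sketch
below (hypothetical helper names, not necessarily the actual v6
interface at [1]). It assumes the kernfs_rwsem[] array that this patch
adds to struct kernfs_root, plus hash_ptr() from <linux/hash.h>:

static inline struct rw_semaphore *kernfs_rwsem_ptr(struct kernfs_node *kn)
{
	int idx = hash_ptr(kn, NR_KERNFS_LOCK_BITS);

	/* map a node to one of the hashed per-fs rwsems */
	return &kernfs_root(kn)->kernfs_rwsem[idx];
}

static inline void kernfs_rwsem_assert_held(struct kernfs_node *kn)
{
	lockdep_assert_held(kernfs_rwsem_ptr(kn));
}

With something like this, kernfs_active() only calls
kernfs_rwsem_assert_held(kn) and never sees the hashing.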

> 
>> -static void kernfs_drain(struct kernfs_node *kn)
>> -	__releases(&kernfs_root(kn)->kernfs_rwsem)
[...]
>> +	lockdep_assert_held_write(&root->kernfs_rwsem[a_idx]);
>> +	WARN_ON_ONCE(atomic_read(&kn->active) >= 0);
> 
> Ditto, I'd much prefer to see the lock lookup and accompanying operations to
> be encapsulated somehow.

Done.
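
For example, the write lock/unlock pair wrapping that lookup could be
sketched as below (again hypothetical names, building on the
kernfs_rwsem_ptr() sketch above; the v6 interface may differ):

static inline void down_write_kernfs_rwsem_self(struct kernfs_node *kn)
{
	/* take the hashed rwsem covering @kn for writing */
	down_write(kernfs_rwsem_ptr(kn));
}

static inline void up_write_kernfs_rwsem_self(struct kernfs_node *kn)
{
	up_write(kernfs_rwsem_ptr(kn));
}
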
> 
>> @@ -1588,37 +1597,65 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
>>  		     const char *new_name, const void *new_ns)
>>  {
>>  	struct kernfs_node *old_parent;
>> -	struct kernfs_root *root;
>>  	const char *old_name = NULL;
>> -	int error;
>> +	int error, idx, np_idx, p_idx;
>>  
>>  	/* can't move or rename root */
>>  	if (!kn->parent)
>>  		return -EINVAL;
>>  
>> -	root = kernfs_root(kn);
>> -	down_write(&root->kernfs_rwsem);
>> +	/*
>> +	 * Take lock of node's old (current) parent.
>> +	 * If new parent has a different lock, then take that
>> +	 * lock as well.
>> +	 */
>> +	idx = hash_ptr(kn, NR_KERNFS_LOCK_BITS);
>> +	p_idx = hash_ptr(kn->parent, NR_KERNFS_LOCK_BITS);
>> +	np_idx = hash_ptr(new_parent, NR_KERNFS_LOCK_BITS);
>> +
>> +	/*
>> +	 * Take only kn's lock. The subsequent kernfs_put
>> +	 * may free up old_parent so if old_parent has a
>> +	 * different lock, we will explicitly release that.
>> +	 */
>> +	down_write_kernfs_rwsem(kn, LOCK_SELF, 0);
>> +
>> +	if (idx != np_idx) /* new parent hashes to different lock */
>> +		down_write_kernfs_rwsem(new_parent, LOCK_SELF, 1);
>> +
>> +	/* old_parent hashes to a different lock */
>> +	if (idx != p_idx && p_idx != np_idx)
>> +		down_write_kernfs_rwsem(kn->parent, LOCK_SELF, 2);
> 
> Can't this lead to ABBA deadlock? When double locking, the locking order
> should always be consistent. If we were doing per-kernfs_node lock, child ->
> parent ordering works but we're hashing locks, so that doesn't work anymore
> - one child-parent combo can lock A then B while the other child-parent
> combo hash the other way around and lock B then A. The only order we can
> define is in terms of the locks themselves - e.g. if the address (or index)
> of lock A < lock B, then we lock A first whether that maps to the child or
> parent.
> 

Indeed. In v6 of this change I am no longer deriving the locking order
from the parent<->child relationship. As suggested, the new interface
uses lock addresses to determine the locking order for nested locking
cases.
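
As a rough sketch of that address ordering (hypothetical helper, not
the exact v6 interface; it reuses the kernfs_rwsem_ptr() sketch from
earlier in this mail):

static inline void down_write_kernfs_rwsem_pair(struct kernfs_node *kn1,
						struct kernfs_node *kn2)
{
	struct rw_semaphore *rwsem1 = kernfs_rwsem_ptr(kn1);
	struct rw_semaphore *rwsem2 = kernfs_rwsem_ptr(kn2);

	if (rwsem1 == rwsem2) {
		/* both nodes hash to the same lock, take it only once */
		down_write(rwsem1);
		return;
	}

	/* always take the lower-addressed lock first to avoid ABBA */
	if (rwsem1 < rwsem2) {
		down_write(rwsem1);
		down_write_nested(rwsem2, SINGLE_DEPTH_NESTING);
	} else {
		down_write(rwsem2);
		down_write_nested(rwsem1, SINGLE_DEPTH_NESTING);
	}
}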

> Also, please encapsulate double locking in a set of functions. We really
> don't wanna see all the details in the users.
> 

Sure. This has been taken care of in v6 of the patchset at [1].
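
With a matching unlock helper, callers such as kernfs_rename_ns() no
longer see any of the hashing or ordering details (hypothetical sketch;
it ignores the old-parent lock that the real patch also handles):

static inline void up_write_kernfs_rwsem_pair(struct kernfs_node *kn1,
					      struct kernfs_node *kn2)
{
	struct rw_semaphore *rwsem1 = kernfs_rwsem_ptr(kn1);
	struct rw_semaphore *rwsem2 = kernfs_rwsem_ptr(kn2);

	up_write(rwsem1);
	if (rwsem2 != rwsem1)
		up_write(rwsem2);
}

kernfs_rename_ns() would then simply call
down_write_kernfs_rwsem_pair(kn, new_parent) on entry and the matching
up_write_kernfs_rwsem_pair() on the way out.
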
>> --- a/fs/kernfs/kernfs-internal.h
>> +++ b/fs/kernfs/kernfs-internal.h
>> @@ -19,6 +19,9 @@
>>  #include <linux/kernfs.h>
>>  #include <linux/fs_context.h>
>>  
>> +#define LOCK_SELF 0
>> +#define LOCK_SELF_AND_PARENT 1
> 
> I get that this is private header but can you please add some identifying
> prefix and make them enums? That makes it way easier for debuggers, bpf and
> tracing.
> 

Understood. These have been replaced with a properly prefixed enum.
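
Roughly something along these lines (the identifiers here are only
illustrative; the actual names in v6 may differ):

enum kernfs_rwsem_lock_pattern {
	KERNFS_RWSEM_LOCK_SELF,
	KERNFS_RWSEM_LOCK_SELF_AND_PARENT,
};
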
>> +/*
>> + * If both node and it's parent need locking,
>> + * lock child first so that kernfs_rename_ns
>> + * does not change the parent, leaving us
>> + * with old parent here.
>> + */
> 
> Please reflow it close to 80 chars and ditto with above. We can't follow
> child -> parent order with hashed locks.

Done as mentioned earlier.

Thanks,
-- Imran

[1]:
https://lore.kernel.org/lkml/20220214120322.2402628-1-imran.f.khan@oracle.com/
