Message-ID: <YgKm5aSCcCYWkck2@slm.duckdns.org>
Date: Tue, 8 Feb 2022 07:22:45 -1000
From: Tejun Heo <tj@...nel.org>
To: Imran Khan <imran.f.khan@...cle.com>
Cc: gregkh@...uxfoundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 1/2] kernfs: use hashed mutex and spinlock in place of
global ones.
On Sun, Feb 06, 2022 at 12:09:24PM +1100, Imran Khan wrote:
> +/*
> + * NR_KERNFS_LOCK_BITS determines size (NR_KERNFS_LOCKS) of hash
> + * table of locks.
> + * A small hash table would hurt scalability, since more and more
> + * kernfs_node objects would end up sharing the same lock, while a
> + * very large hash table would waste memory.
> + *
> + * At the moment size of hash table of locks is being set based on
> + * the number of CPUs as follows:
> + *
> + * NR_CPU       NR_KERNFS_LOCK_BITS     NR_KERNFS_LOCKS
> + * 1            1                       2
> + * 2-3          2                       4
> + * 4-7          4                       16
> + * 8-15         6                       64
> + * 16-31        8                       256
> + * 32 and more  10                      1024
> + */
> +#ifdef CONFIG_SMP
> +#define NR_KERNFS_LOCK_BITS (2 * (ilog2(NR_CPUS < 32 ? NR_CPUS : 32)))
> +#else
> +#define NR_KERNFS_LOCK_BITS 1
> +#endif
> +
> +#define NR_KERNFS_LOCKS (1 << NR_KERNFS_LOCK_BITS)
I have a couple of questions:
* How did you come up with the above numbers? Are they based on some
experimentation? It'd be nice if the comment explains why the numbers are
like that.
* IIRC, we recently made these locks per kernfs instance as a way to
mitigate lock contention across kernfs instances. I don't think it's
beneficial to keep these hashed locks separate per instance. It'd be both
simpler and less contended to double the size of one shared hashtable than
to split it into two separate half-sized ones. So, maybe switch to global
hashtables and increase the size?
Thanks.
--
tejun