Message-ID: <063D6719AE5E284EB5DD2968C1650D6D1CB077F0@AcuExch.aculab.com>
Date:	Mon, 23 Mar 2015 12:53:11 +0000
From:	David Laight <David.Laight@...LAB.COM>
To:	'Herbert Xu' <herbert@...dor.apana.org.au>,
	"David S. Miller" <davem@...emloft.net>,
	Thomas Graf <tgraf@...g.ch>,
	Eric Dumazet <eric.dumazet@...il.com>,
	Patrick McHardy <kaber@...sh.net>,
	Josh Triplett <josh@...htriplett.org>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [v1 PATCH 9/10] rhashtable: Allow GFP_ATOMIC bucket table
 allocation

From: Herbert Xu
> This patch adds the ability to allocate bucket table with GFP_ATOMIC
> instead of GFP_KERNEL.  This is needed when we perform an immediate
> rehash during insertion.
> 
> Signed-off-by: Herbert Xu <herbert@...dor.apana.org.au>
> ---
> 
>  lib/rhashtable.c |   24 ++++++++++++++----------
>  1 file changed, 14 insertions(+), 10 deletions(-)
> 
> diff --git a/lib/rhashtable.c b/lib/rhashtable.c
> index c284099..59078ed 100644
> --- a/lib/rhashtable.c
> +++ b/lib/rhashtable.c
> @@ -58,7 +58,8 @@ EXPORT_SYMBOL_GPL(lockdep_rht_bucket_is_held);
>  #endif
> 
> 
> -static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl)
> +static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl,
> +			      gfp_t gfp)
>  {
>  	unsigned int i, size;
>  #if defined(CONFIG_PROVE_LOCKING)
> @@ -75,12 +76,13 @@ static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl)
> 
>  	if (sizeof(spinlock_t) != 0) {
>  #ifdef CONFIG_NUMA
> -		if (size * sizeof(spinlock_t) > PAGE_SIZE)
> +		if (size * sizeof(spinlock_t) > PAGE_SIZE &&
> +		    gfp == GFP_KERNEL)
>  			tbl->locks = vmalloc(size * sizeof(spinlock_t));
>  		else
>  #endif
>  		tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
> -					   GFP_KERNEL);
> +					   gfp);
>  		if (!tbl->locks)
>  			return -ENOMEM;
>  		for (i = 0; i < size; i++)
...

If the lock array can't be allocated, then it is probably best to use
a lock array that is half the size rather than failing the expand.
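
Something like the following (a completely untested sketch, keeping the
locals of alloc_bucket_locks() above) would keep halving the array until
the allocation succeeds:

	/* Untested: fall back to a smaller lock array rather than
	 * failing the expand.  size stays a power of two, so the
	 * locks_mask indexing still works. */
	while (size > 1) {
		tbl->locks = kmalloc_array(size, sizeof(spinlock_t), gfp);
		if (tbl->locks)
			break;
		size >>= 1;
	}
	if (!tbl->locks)
		return -ENOMEM;
	tbl->locks_mask = size - 1;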

I'm not sure your current version would allow the old lock array to be used.

Does Linux have any architectures where someone has decided to make a
spinlock consume an entire cache line rather than just a single word?
If so, the lock array can be gigantic: 128 locks per cpu at 64 bytes
each is already 512k on a 64-cpu box.

Given that the lock is only used for insert and delete, I'm also not at
all clear why you allocate 128 locks per cpu for very large tables.
With the locks in their own array I don't think there can be 'false
sharing'; the worst that can happen is two cpus spinning on locks
in the same cache line.
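
(For reference, the bucket-to-lock mapping is just a mask; from memory
it is roughly:

	/* Each bucket hashes onto one of the striped locks. */
	static spinlock_t *bucket_lock(const struct bucket_table *tbl,
				       u32 hash)
	{
		return &tbl->locks[hash & tbl->locks_mask];
	}

so the number of distinct locks only affects how many buckets share
each lock.)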

It isn't as though the locks are likely to be held for any length
of time either.

	David

