Message-ID: <1483740889.9712.44.camel@edumazet-glaptop3.roam.corp.google.com>
Date:   Fri, 06 Jan 2017 14:14:49 -0800
From:   Eric Dumazet <eric.dumazet@...il.com>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     Tom Herbert <tom@...bertland.com>, linux-mm@...ck.org,
        LKML <linux-kernel@...r.kernel.org>, edumazet@...gle.com
Subject: Re: weird allocation pattern in alloc_ila_locks

On Fri, 2017-01-06 at 13:16 +0100, Michal Hocko wrote:
> I was thinking about rhashtable, which was the source of the copy&paste,
> and it can be simplified as well.
> ---
> From 555543604f5f020284ea85d928d52f6a55fde7ca Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@...e.com>
> Date: Fri, 6 Jan 2017 13:12:31 +0100
> Subject: [PATCH] rhashtable: simplify a strange allocation pattern
> 
> The allocation pattern in alloc_bucket_locks is quite unusual. We
> prefer vmalloc when CONFIG_NUMA is enabled, which doesn't make much
> sense because there is no special NUMA locality handling in that code
> path. Let's just simplify the code and use the kvmalloc helper, which
> transparently uses kmalloc with a vmalloc fallback when the caller is
> allowed to block, and fall back to kmalloc_array otherwise.
> 
> Signed-off-by: Michal Hocko <mhocko@...e.com>
> ---
>  lib/rhashtable.c | 13 +++----------
>  1 file changed, 3 insertions(+), 10 deletions(-)
> 
> diff --git a/lib/rhashtable.c b/lib/rhashtable.c
> index 32d0ad058380..4d3886b6ab7d 100644
> --- a/lib/rhashtable.c
> +++ b/lib/rhashtable.c
> @@ -77,16 +77,9 @@ static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl,
>  	size = min_t(unsigned int, size, tbl->size >> 1);
>  
>  	if (sizeof(spinlock_t) != 0) {
> -		tbl->locks = NULL;
> -#ifdef CONFIG_NUMA
> -		if (size * sizeof(spinlock_t) > PAGE_SIZE &&
> -		    gfp == GFP_KERNEL)
> -			tbl->locks = vmalloc(size * sizeof(spinlock_t));
> -#endif
> -		if (gfp != GFP_KERNEL)
> -			gfp |= __GFP_NOWARN | __GFP_NORETRY;
> -
> -		if (!tbl->locks)
> +		if (gfpflags_allow_blocking(gfp))
> +			tbl->locks = kvmalloc(size * sizeof(spinlock_t), gfp);
> +		else
>  			tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
>  						   gfp);
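
[For context, a minimal sketch of the allocation choice the quoted patch
encodes. The helper name below is hypothetical, and this is not the real
kvmalloc internals, which additionally clamp the gfp flags and check for
size overflow.]

	/* Sketch: pick the allocator based on whether the caller may block.
	 * kvmalloc() tries kmalloc() first and falls back to vmalloc(), but
	 * vmalloc() is only legal in sleeping context, so atomic callers
	 * must stay with a plain kmalloc-based allocation. */
	static spinlock_t *alloc_lock_array(unsigned int size, gfp_t gfp)
	{
		if (gfpflags_allow_blocking(gfp))
			return kvmalloc(size * sizeof(spinlock_t), gfp);

		/* atomic context: physically contiguous kmalloc only;
		 * kmalloc_array() also guards the multiplication against
		 * overflow */
		return kmalloc_array(size, sizeof(spinlock_t), gfp);
	}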


I believe the intent was to get NUMA spreading, a bit like what we have
in alloc_large_system_hash() when hashdist == HASHDIST_DEFAULT.

For hash tables that are not attached to a single NUMA node, this might
make sense.
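
[For reference, a rough sketch of the hashdist pattern the reply alludes
to, heavily simplified from alloc_large_system_hash(), which also deals
with memblock allocations and boot-time sizing; the function below is
hypothetical. The spreading works because vmalloc() builds its mapping
page by page, so under an interleave mempolicy those pages land on
different NUMA nodes, whereas kmalloc() returns node-local contiguous
memory.]

	/* Simplified illustration of HASHDIST-style NUMA spreading.
	 * "hashdist" is the real boot parameter, on by default when
	 * CONFIG_NUMA is enabled (HASHDIST_DEFAULT). */
	static void *alloc_hash_area(size_t bytes)
	{
		if (hashdist)
			/* page-by-page mapping: interleaved across nodes */
			return vmalloc(bytes);

		/* contiguous, node-local allocation */
		return kmalloc(bytes, GFP_KERNEL);
	}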


