Message-ID: <42a48532-23f5-f965-1e14-aa4b292b13cd@suse.cz>
Date: Mon, 30 Jan 2017 15:04:43 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Michal Hocko <mhocko@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: David Rientjes <rientjes@...gle.com>, Mel Gorman <mgorman@...e.de>,
Johannes Weiner <hannes@...xchg.org>,
Al Viro <viro@...iv.linux.org.uk>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>,
Michal Hocko <mhocko@...e.com>,
Tom Herbert <tom@...bertland.com>,
Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH 3/9] rhashtable: simplify a strange allocation pattern
On 01/30/2017 10:49 AM, Michal Hocko wrote:
> From: Michal Hocko <mhocko@...e.com>
>
> The allocation pattern in alloc_bucket_locks is quite unusual: vmalloc
> is preferred when CONFIG_NUMA is enabled. The rationale is that vmalloc
> respects the memory policy of the current process, so the backing
> memory gets distributed over multiple nodes if the requester is
> configured properly. At least that is the intention; in reality,
> rhashtable is shrunk and expanded from a kernel worker, so no mempolicy
> can be assumed.
>
> Let's just simplify the code: use the kvmalloc helper, which
> transparently uses kmalloc with a vmalloc fallback, when the caller is
> allowed to block, and keep using plain kmalloc with the given flags
> otherwise.
>
> Cc: Tom Herbert <tom@...bertland.com>
> Cc: Eric Dumazet <eric.dumazet@...il.com>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
> Signed-off-by: Michal Hocko <mhocko@...e.com>
> ---
> lib/rhashtable.c | 13 +++----------
> 1 file changed, 3 insertions(+), 10 deletions(-)
>
> diff --git a/lib/rhashtable.c b/lib/rhashtable.c
> index 32d0ad058380..1a487ea70829 100644
> --- a/lib/rhashtable.c
> +++ b/lib/rhashtable.c
> @@ -77,16 +77,9 @@ static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl,
> size = min_t(unsigned int, size, tbl->size >> 1);
>
> if (sizeof(spinlock_t) != 0) {
> - tbl->locks = NULL;
> -#ifdef CONFIG_NUMA
> - if (size * sizeof(spinlock_t) > PAGE_SIZE &&
> - gfp == GFP_KERNEL)
> - tbl->locks = vmalloc(size * sizeof(spinlock_t));
> -#endif
> - if (gfp != GFP_KERNEL)
> - gfp |= __GFP_NOWARN | __GFP_NORETRY;
> -
> - if (!tbl->locks)
> + if (gfpflags_allow_blocking(gfp))
> + tbl->locks = kvmalloc(size * sizeof(spinlock_t), gfp);
> + else
> tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
> gfp);
> if (!tbl->locks)
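For reference, after this patch the lock-array allocation reduces to
something like the sketch below. This is a minimal illustration of the
pattern, not the exact patched function; alloc_locks and nr_locks are
made-up names here:

	#include <linux/gfp.h>		/* gfpflags_allow_blocking() */
	#include <linux/mm.h>		/* kvmalloc(), kvfree() */
	#include <linux/slab.h>		/* kmalloc_array() */
	#include <linux/spinlock.h>

	/* Hypothetical helper mirroring the pattern in alloc_bucket_locks(). */
	static spinlock_t *alloc_locks(unsigned int nr_locks, gfp_t gfp)
	{
		spinlock_t *locks;

		/*
		 * kvmalloc() may fall back to vmalloc() and thus sleep, so it
		 * is only used when the gfp mask allows blocking; otherwise a
		 * plain kmalloc_array() attempt is made with the given flags.
		 */
		if (gfpflags_allow_blocking(gfp))
			locks = kvmalloc(nr_locks * sizeof(spinlock_t), gfp);
		else
			locks = kmalloc_array(nr_locks, sizeof(spinlock_t),
					      gfp);

		return locks;	/* free with kvfree() in both cases */
	}

Note that kvfree() handles both the kmalloc and vmalloc cases, so
callers stay oblivious to which allocator ended up serving the request.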
>