Message-ID: <e62d9465-7056-4714-9a4e-4645a457774e@suse.cz>
Date: Fri, 5 Apr 2024 12:44:22 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Alexander Lobakin <aleksander.lobakin@...el.com>,
"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>
Cc: Alexander Duyck <alexanderduyck@...com>,
Yunsheng Lin <linyunsheng@...wei.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Christoph Lameter <cl@...ux.com>, Andrew Morton <akpm@...ux-foundation.org>,
nex.sw.ncis.osdt.itp.upstreaming@...el.com, netdev@...r.kernel.org,
intel-wired-lan@...ts.osuosl.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Suren Baghdasaryan <surenb@...gle.com>,
Kent Overstreet <kent.overstreet@...ux.dev>
Subject: Re: [PATCH net-next v9 4/9] slab: introduce kvmalloc_array_node() and
kvcalloc_node()
On 4/4/24 5:43 PM, Alexander Lobakin wrote:
> Add NUMA-aware counterparts for kvmalloc_array() and kvcalloc() to be
> able to flexibly allocate arrays for a particular node.
> Rewrite kvmalloc_array() as a kvmalloc_array_node(NUMA_NO_NODE) call.
>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@...el.com>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
This will, however, cause some conflicts with the alloc tagging series in the
mm tree in -next, and the new wrappers will have to be adjusted.
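For the eventual callers of the new helpers, a minimal usage sketch (the
struct and variable names below are hypothetical, made up purely for
illustration; only kvcalloc_node()/kvmalloc_array_node(), GFP_KERNEL and
kvfree() are existing kernel API):

	/* per-node array of counters, falling back to vmalloc if needed */
	struct foo_stat *stats;

	stats = kvcalloc_node(nr_entries, sizeof(*stats), GFP_KERNEL, node);
	if (!stats)
		return -ENOMEM;

	/* ... use stats[] ... */

	kvfree(stats);

i.e. the same calling convention as kvcalloc(), with the preferred NUMA node
appended, mirroring kvmalloc_node().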
> ---
> include/linux/slab.h | 17 +++++++++++++++--
> 1 file changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index e53cbfa18325..d1d1fa5e7983 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -774,14 +774,27 @@ static inline __alloc_size(1) void *kvzalloc(size_t size, gfp_t flags)
> return kvmalloc(size, flags | __GFP_ZERO);
> }
>
> -static inline __alloc_size(1, 2) void *kvmalloc_array(size_t n, size_t size, gfp_t flags)
> +static inline __alloc_size(1, 2) void *
> +kvmalloc_array_node(size_t n, size_t size, gfp_t flags, int node)
> {
> size_t bytes;
>
> if (unlikely(check_mul_overflow(n, size, &bytes)))
> return NULL;
>
> - return kvmalloc(bytes, flags);
> + return kvmalloc_node(bytes, flags, node);
> +}
> +
> +static inline __alloc_size(1, 2) void *
> +kvmalloc_array(size_t n, size_t size, gfp_t flags)
> +{
> + return kvmalloc_array_node(n, size, flags, NUMA_NO_NODE);
> +}
> +
> +static inline __alloc_size(1, 2) void *
> +kvcalloc_node(size_t n, size_t size, gfp_t flags, int node)
> +{
> + return kvmalloc_array_node(n, size, flags | __GFP_ZERO, node);
> }
>
> static inline __alloc_size(1, 2) void *kvcalloc(size_t n, size_t size, gfp_t flags)