Message-ID: <CANn89iJoHDzfYfhcwVvR4m7DiVG-UfFNqm+D1WD-2wjOttk6ew@mail.gmail.com>
Date: Tue, 20 Feb 2024 06:32:15 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Huang Shijie <shijie@...amperecomputing.com>
Cc: kuba@...nel.org, patches@...erecomputing.com, davem@...emloft.net,
horms@...nel.org, ast@...nel.org, dhowells@...hat.com, linyunsheng@...wei.com,
aleksander.lobakin@...el.com, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org, cl@...amperecomputing.com
Subject: Re: [PATCH] net: skbuff: allocate the fclone in the current NUMA node
On Tue, Feb 20, 2024 at 3:18 AM Huang Shijie
<shijie@...amperecomputing.com> wrote:
>
> The current code passes NUMA_NO_NODE to __alloc_skb(), we found
> it may creates fclone SKB in remote NUMA node.
This is intended (working as intended).
What about the NUMA policies of the current thread?
Has NUMA_NO_NODE behavior changed recently?
What does "it may creates" mean? Please be more specific.
>
> So use numa_node_id() to limit the allocation to current NUMA node.
We prefer the allocation to succeed, rather than fail when the current
NUMA node has no available memory.
Please check:
grep . /sys/devices/system/node/node*/numastat
Are you going to change the ~700 uses of NUMA_NO_NODE in the kernel?
Just curious.
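
To make the objection concrete: with NUMA_NO_NODE the page allocator
prefers the local node but can fall back to any node with free memory,
whereas pinning the allocation to numa_node_id() can fail outright when
the local node is exhausted. A minimal userspace sketch of that policy
difference (pick_node() and its arguments are hypothetical models, not
kernel APIs):

```c
#include <assert.h>

#define NUMA_NO_NODE (-1)

/* Hypothetical model of node selection: a fixed node request pins the
 * allocation (returning -1 on failure when that node has no free pages),
 * while NUMA_NO_NODE prefers the local node but may fall back to any
 * node that still has memory, keeping the allocation alive. */
static int pick_node(int requested, int local,
		     const long *free_pages, int nr_nodes)
{
	int n;

	if (requested != NUMA_NO_NODE)
		return free_pages[requested] > 0 ? requested : -1;

	if (free_pages[local] > 0)
		return local;		/* fast path: local node has memory */

	for (n = 0; n < nr_nodes; n++)
		if (free_pages[n] > 0)
			return n;	/* fallback instead of failing */

	return -1;
}
```

With node 0 exhausted and node 1 holding free pages, NUMA_NO_NODE from
node 0 still succeeds (falling back to node 1), while a hard request for
node 0 fails. This is the behavior the patch would lose.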
>
> Signed-off-by: Huang Shijie <shijie@...amperecomputing.com>
> ---
> include/linux/skbuff.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
> index 2dde34c29203..ebc42b2604ad 100644
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -1343,7 +1343,7 @@ static inline bool skb_fclone_busy(const struct sock *sk,
> static inline struct sk_buff *alloc_skb_fclone(unsigned int size,
> gfp_t priority)
> {
> - return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, NUMA_NO_NODE);
> + return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, numa_node_id());
> }
>
> struct sk_buff *skb_morph(struct sk_buff *dst, struct sk_buff *src);
> --
> 2.40.1
>