Message-ID: <CANn89i+1uMAL_025rNc3C1Ut-E5S8Nat6KhKEzcFeC1xxcFWaA@mail.gmail.com>
Date: Tue, 20 Feb 2024 09:17:30 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: Shijie Huang <shijie@...eremail.onmicrosoft.com>
Cc: Huang Shijie <shijie@...amperecomputing.com>, kuba@...nel.org,
patches@...erecomputing.com, davem@...emloft.net, horms@...nel.org,
ast@...nel.org, dhowells@...hat.com, linyunsheng@...wei.com,
aleksander.lobakin@...el.com, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org, cl@...amperecomputing.com
Subject: Re: [PATCH] net: skbuff: allocate the fclone in the current NUMA node
On Tue, Feb 20, 2024 at 7:26 AM Shijie Huang
<shijie@...eremail.onmicrosoft.com> wrote:
>
>
> On 2024/2/20 13:32, Eric Dumazet wrote:
> > On Tue, Feb 20, 2024 at 3:18 AM Huang Shijie
> > <shijie@...amperecomputing.com> wrote:
> >> The current code passes NUMA_NO_NODE to __alloc_skb(); we found
> >> that it may create fclone SKBs on a remote NUMA node.
> > This is intended (WAI)
>
> Okay, thanks a lot.
>
> It seems I should fix the issue in other code, not in the networking stack.
>
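To illustrate why this is working as intended: __alloc_skb() simply
forwards the node hint to the slab allocator. A simplified sketch of
the path in question (not the exact upstream code; names and details
vary across kernel versions):

struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
			    int flags, int node)
{
	struct kmem_cache *cache = (flags & SKB_ALLOC_FCLONE) ?
				   skbuff_fclone_cache : skbuff_cache;
	struct sk_buff *skb;

	/* With node == NUMA_NO_NODE the slab allocator is free to
	 * serve the object from the current per-CPU slab, which may
	 * hold memory that originally came from another node (e.g.
	 * objects freed by a remote CPU). That is cheaper than
	 * forcing a node match on every allocation.
	 */
	skb = kmem_cache_alloc_node(cache, gfp_mask & ~GFP_DMA, node);
	/* ... skb->head allocation, fclone refcount setup, etc. ... */
	return skb;
}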
> >
> > What about the NUMA policies of the current thread?
>
> We run memcached with "numactl -m 0", so the NUMA policy should
> allocate fclones on node 0, but we can see that many fclones were
> allocated on node 1.
>
> We have enough memory to allocate these fclones on node 0.
>
> >
> > Has NUMA_NO_NODE behavior changed recently?
> I guess not.
> >
> > What does "it may create" mean? Please be more specific.
>
> When we test memcached on a NUMA system, maybe 20% ~ 30% of the
> fclones are allocated on a remote NUMA node.
Interesting, how was it measured exactly?
Are you using SLUB or SLAB?
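If it helps, one way this could be measured (an assumption on my part,
not necessarily what you did) is a temporary debug printk in the
fclone path, comparing the node the object actually landed on with the
node of the allocating CPU:

	skb = kmem_cache_alloc_node(cache, gfp_mask & ~GFP_DMA, node);
	if (skb && (flags & SKB_ALLOC_FCLONE))
		/* page_to_nid() gives the node the slab page really
		 * lives on; numa_node_id() is the node of the CPU
		 * doing the allocation.
		 */
		pr_info_ratelimited("fclone: obj node %d, cpu node %d\n",
				    page_to_nid(virt_to_page(skb)),
				    numa_node_id());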
>
> After this patch, all the fclones are allocated on the correct node.
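(I presume the patch does something along these lines; I have not
checked the exact diff:)

 static inline struct sk_buff *alloc_skb_fclone(unsigned int size,
 						gfp_t priority)
 {
-	return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, NUMA_NO_NODE);
+	return __alloc_skb(size, priority, SKB_ALLOC_FCLONE, numa_mem_id());
 }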
Note that skbs for TCP have three memory components (or more for large
packets):

 - the sk_buff itself
 - skb->head
 - page frags (see sk_page_frag_refill() for non-zero-copy payload)

The payload should follow the NUMA policy of the current thread; that
is really what matters.
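Roughly, the payload path looks like this (again a sketch, not the
exact upstream code): skb_page_frag_refill() falls through to the page
allocator, and alloc_page() honours the calling task's mempolicy, so
"numactl -m 0" is obeyed for the bulk of the data even when the
sk_buff metadata is not node-local:

bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag,
			  gfp_t gfp)
{
	if (pfrag->page) {
		/* Reuse the current page if we are its only user or
		 * if it still has room for sz bytes.
		 */
		if (page_ref_count(pfrag->page) == 1) {
			pfrag->offset = 0;
			return true;
		}
		if (pfrag->offset + sz <= pfrag->size)
			return true;
		put_page(pfrag->page);
	}
	/* alloc_page() goes through the page allocator, which applies
	 * the task mempolicy ("numactl -m 0"), unlike the slab node
	 * hint discussed above.
	 */
	pfrag->page = alloc_page(gfp);
	if (likely(pfrag->page)) {
		pfrag->size = PAGE_SIZE;
		pfrag->offset = 0;
		return true;
	}
	return false;
}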