Message-ID: <CANn89iKFPjSQhXRcyb+EDQiH0xJG1WdWVGXXLK6iOcMpM2zKyQ@mail.gmail.com>
Date: Wed, 28 Feb 2024 10:32:47 +0100
From: Eric Dumazet <edumazet@...gle.com>
To: "Lameter, Christopher" <cl@...amperecomputing.com>
Cc: Adam Li <adamli@...amperecomputing.com>, corbet@....net, davem@...emloft.net,
kuba@...nel.org, pabeni@...hat.com, willemb@...gle.com,
yangtiezhu@...ngson.cn, atenart@...nel.org, kuniyu@...zon.com,
wuyun.abel@...edance.com, leitao@...ian.org, alexander@...alicyn.com,
dhowells@...hat.com, paulmck@...nel.org, joel.granados@...il.com,
urezki@...il.com, joel@...lfernandes.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
patches@...erecomputing.com, shijie@...amperecomputing.com
Subject: Re: [PATCH] net: make SK_MEMORY_PCPU_RESERV tunable

On Wed, Feb 28, 2024 at 12:08 AM Lameter, Christopher
<cl@...amperecomputing.com> wrote:
>
> On Tue, 27 Feb 2024, Eric Dumazet wrote:
>
> >> sk_prot->memory_allocated points to global atomic variable:
> >> atomic_long_t tcp_memory_allocated ____cacheline_aligned_in_smp;
> >>
> >> If the per-cpu cache size is increased from 1MB to e.g. 16MB,
> >> changes to sk->sk_prot->memory_allocated can be further reduced.
> >> Performance may be improved on systems with many cores.
> >
> > This looks good. Do you have any performance numbers to share?
> >
> > On a host with 384 threads, 384 * 16 MB -> 6 GB of memory.
>
> Those machines also come with correspondingly large memories of a couple of TB...
>
> > With this kind of use, we might need a shrinker...
>
> Yes. No point in keeping the buffers around if the core stops doing
> networking. But that should be done at times when there is no contention, please.
I have yet to see this 'contention'.

I usually see contention on the zone spinlock or the memcg locks when
allocating/freeing pages, not on the tcp_memory_allocated atomic.

We can add caches for sure; we had a giant one before my patch, and it
was a real disaster for workloads with millions of TCP sockets.
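
For context, the mechanism being tuned looks roughly like the sketch
below. Names such as per_cpu_fw_alloc, SK_MEMORY_PCPU_RESERVE and
sk_memory_allocated_add() are taken from my reading of
include/net/sock.h; treat the details as approximate rather than a
verbatim copy of the in-tree code.

/* Rough sketch of the per-CPU batching of protocol memory accounting
 * being discussed: each CPU accumulates charges locally (in page
 * units) and only folds them into the shared atomic once the local
 * reserve crosses the threshold, so the cache line holding
 * tcp_memory_allocated is touched far less often.
 */
static inline void sk_memory_allocated_add(struct sock *sk, int amt)
{
	int local_reserve;

	preempt_disable();
	/* Accumulate into this CPU's forward-alloc reserve. */
	local_reserve = __this_cpu_add_return(*sk->sk_prot->per_cpu_fw_alloc, amt);
	if (local_reserve >= SK_MEMORY_PCPU_RESERVE) {
		/* Flush the whole local surplus into the global counter. */
		__this_cpu_sub(*sk->sk_prot->per_cpu_fw_alloc, local_reserve);
		atomic_long_add(local_reserve, sk->sk_prot->memory_allocated);
	}
	preempt_enable();
}

Raising the reserve lets each CPU batch more charges before it has to
touch the shared atomic, which is also why the worst case above scales
as nr_cpus * reserve (384 * 16 MB -> ~6 GB).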