Message-ID: <aBHHt9_ruks4q4Ll@tiehlicka>
Date: Wed, 30 Apr 2025 08:48:23 +0200
From: Michal Hocko <mhocko@...e.com>
To: Shakeel Butt <shakeel.butt@...ux.dev>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Muchun Song <muchun.song@...ux.dev>,
Vlastimil Babka <vbabka@...e.cz>, Jakub Kicinski <kuba@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Soheil Hassas Yeganeh <soheil@...gle.com>, linux-mm@...ck.org,
cgroups@...r.kernel.org, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org,
Meta kernel team <kernel-team@...a.com>
Subject: Re: [PATCH] memcg: multi-memcg percpu charge cache
On Tue 29-04-25 11:43:29, Shakeel Butt wrote:
> On Tue, Apr 29, 2025 at 02:13:16PM +0200, Michal Hocko wrote:
> >
> > > Some of the design choices are:
> > >
> > > 1. Fit all caches memcgs in a single cacheline.
> >
> > Could you be more specific about the reasoning? I suspect it is for the
> > network receive path you are mentioning above, right?
> >
>
> Here I meant why I chose NR_MEMCG_STOCK to be 7. Basically the first
> cacheline of the per-cpu stock holds all the cached memcgs, so checking
> whether a given memcg is cached should be comparably cheap to the
> single-cached-memcg case. The comment you suggested already mentions
> this.
>
> However, please note that we may find in the future that two
> cachelines' worth of cached memcgs works better for a wider range of
> workloads, but for simplicity let's start with a single cacheline's
> worth.
Right, and this is exactly why extended reasoning is really due. We
have introduced magic constants in the past and then struggled to
decide whether changing them might regress something.
I can imagine that we could extend the caching to be adaptive in the
future and scale with the number of cgroups in some way.
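
For illustration, a minimal sketch of the layout being discussed; the
struct and field names here (memcg_stock_pcp_sketch, cached, nr_pages,
stock_lookup) are hypothetical, and the actual patch may differ:

/*
 * With NR_MEMCG_STOCK == 7, the seven cached memcg pointers
 * (7 * 8 = 56 bytes on 64-bit) fit within a single 64-byte
 * cacheline, so the "is this memcg cached?" scan touches one line.
 */
#define NR_MEMCG_STOCK 7

struct mem_cgroup;

struct memcg_stock_pcp_sketch {
	/* first cacheline: all cached memcg pointers */
	struct mem_cgroup *cached[NR_MEMCG_STOCK];
	/* per-slot cached page counts can live past the hot line */
	unsigned int nr_pages[NR_MEMCG_STOCK];
};

/* Scan the single cacheline of cached memcg pointers. */
static int stock_lookup(struct memcg_stock_pcp_sketch *stock,
			struct mem_cgroup *memcg)
{
	int i;

	for (i = 0; i < NR_MEMCG_STOCK; i++) {
		if (stock->cached[i] == memcg)
			return i;	/* hit: charge from nr_pages[i] */
	}
	return -1;			/* miss: take the slow path */
}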
>
> [...]
> >
> > Just a minor suggestion below. Other than that, this looks good to
> > me (with the follow-up fixes in this thread).
> > Acked-by: Michal Hocko <mhocko@...e.com>
> > Thanks!
> >
>
> Thanks, I will send a diff for Andrew to squash it into original patch.
Thanks!
--
Michal Hocko
SUSE Labs