Message-ID: <Y9QVWwAreTlDVdZ0@P9FQF9L96D.corp.robot.car>
Date: Fri, 27 Jan 2023 10:18:03 -0800
From: Roman Gushchin <roman.gushchin@...ux.dev>
To: Michal Hocko <mhocko@...e.com>
Cc: Marcelo Tosatti <mtosatti@...hat.com>,
Leonardo Brás <leobras@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
cgroups@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Frederic Weisbecker <fweisbecker@...e.de>
Subject: Re: [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining
On Fri, Jan 27, 2023 at 02:58:19PM +0100, Michal Hocko wrote:
> On Fri 27-01-23 08:11:04, Michal Hocko wrote:
> > [Cc Frederic]
> >
> > On Thu 26-01-23 15:12:35, Roman Gushchin wrote:
> > > On Thu, Jan 26, 2023 at 08:41:34AM +0100, Michal Hocko wrote:
> > [...]
> > > > > Essentially each cpu will try to grab the remains of the memory quota
> > > > > and move it locally. I wonder whether in such circumstances we need to
> > > > > disable the pcp-caching on a per-cgroup basis.
> > > >
> > > > I think it would be more than sufficient to disable pcp charging on an
> > > > isolated cpu.
> > >
> > > It might have significant performance consequences.
> >
> > Is it really significant?
> >
> > > I'd rather opt out of stock draining for isolated cpus: it might slightly
> > > reduce the accuracy of memory limits and slightly increase the memory
> > > footprint (all those dying memcgs...), but the impact will be limited.
> > > Actually, it is bounded by the number of cpus.
> >
> > Hmm, OK, I have misunderstood your proposal. Yes, the overall pcp charges
> > potentially left behind should be small, and that shouldn't really be a
> > concern for memcg oom situations (unless the limit is very small, but
> > workloads on isolated cpus with small hard limits are beyond my
> > imagination).
> >
> > My first thought was that those charges could be left behind without any
> > upper bound, but in reality sooner or later something should be running
> > on those cpus, and once the memcg is gone the pcp cache will get
> > refilled and the old charges dropped.
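
For reference, the refill path that makes this true looked roughly like
the following in mm/memcontrol.c around this time (a sketch, not the
verbatim source): caching a charge for a different memcg first drains
whatever was cached before, so a dead memcg's stale per-cpu stock goes
away as soon as anything else charges on that cpu.

static void __refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
        struct memcg_stock_pcp *stock;

        stock = this_cpu_ptr(&memcg_stock);
        if (stock->cached != memcg) { /* reset if necessary */
                /* flush the previously cached memcg's pages first */
                drain_stock(stock);
                __css_get(memcg, 1);
                stock->cached = memcg;
        }
        stock->nr_pages += nr_pages;

        if (stock->nr_pages > MEMCG_CHARGE_BATCH)
                drain_stock(stock);
}
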
> >
> > So yes, this is actually a better and even simpler solution. All we need
> > is something like this:
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index ab457f0394ab..13b84bbd70ba 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2344,6 +2344,9 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
> >  		struct mem_cgroup *memcg;
> >  		bool flush = false;
> >  
> > +		if (cpu_is_isolated(cpu))
> > +			continue;
> > +
> >  		rcu_read_lock();
> >  		memcg = stock->cached;
> >  		if (memcg && stock->nr_pages &&
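
Note that no cpu_is_isolated() helper exists in the tree at this point.
A minimal sketch of what it could look like, assuming it is built on the
housekeeping API from <linux/sched/isolation.h> (whether HK_TYPE_DOMAIN
and HK_TYPE_TICK are the right masks to test is an open design choice):

#include <linux/sched/isolation.h>

/*
 * Sketch only: treat a cpu as isolated if it is excluded from the
 * scheduler-domain housekeeping mask (isolcpus=) or from the tick
 * housekeeping mask (nohz_full=).
 */
static inline bool cpu_is_isolated(int cpu)
{
        return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) ||
               !housekeeping_test_cpu(cpu, HK_TYPE_TICK);
}
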
>
> Btw. this would be overly pessimistic. The following should make more
> sense:
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ab457f0394ab..55e440e54504 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2357,7 +2357,7 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
>  		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
>  			if (cpu == curcpu)
>  				drain_local_stock(&stock->work);
> -			else
> +			else if (!cpu_is_isolated(cpu))
>  				schedule_work_on(cpu, &stock->work);
>  		}
>  	}
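
With that change, the per-cpu loop in drain_all_stock() would end up
looking roughly as follows (reconstructed from the two hunks above plus
the surrounding code of that era; an excerpt, not the full function):

	curcpu = smp_processor_id();
	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);
		struct mem_cgroup *memcg;
		bool flush = false;

		rcu_read_lock();
		memcg = stock->cached;
		if (memcg && stock->nr_pages &&
		    mem_cgroup_is_descendant(memcg, root_memcg))
			flush = true;
		rcu_read_unlock();

		if (flush &&
		    !test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
			if (cpu == curcpu)
				drain_local_stock(&stock->work);
			else if (!cpu_is_isolated(cpu))
				/* never queue drain work on an isolated cpu */
				schedule_work_on(cpu, &stock->work);
		}
	}
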
Yes, this is exactly what I was thinking of. It should solve the problem
for isolated cpus well enough without introducing overhead for everybody
else.

If you make a proper patch, please add my
Acked-by: Roman Gushchin <roman.gushchin@...ux.dev>

I understand the concerns about spurious OOMs on 256-core machines, but
I guess they are somewhat theoretical, and they are also possible with
the current code (e.g. one ooming cgroup can effectively block draining
for everybody else).
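
The "blocking" comes from the trylock at the top of drain_all_stock():
a drain triggered on behalf of one cgroup holds percpu_charge_mutex for
its whole duration, and concurrent callers simply bail out. Roughly:

static void drain_all_stock(struct mem_cgroup *root_memcg)
{
        int cpu, curcpu;

        /* If someone's already draining, skip this drain entirely. */
        if (!mutex_trylock(&percpu_charge_mutex))
                return;
        /* ... per-cpu drain loop as in the diffs above ... */
        mutex_unlock(&percpu_charge_mutex);
}
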
Thanks!