Message-ID: <9e61ab53e1419a144f774b95230b789244895424.camel@redhat.com>
Date: Wed, 25 Jan 2023 08:06:46 -0300
From: Leonardo Brás <leobras@...hat.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Marcelo Tosatti <mtosatti@...hat.com>, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 0/5] Introduce memcg_stock_pcp remote draining
On Wed, 2023-01-25 at 09:33 +0100, Michal Hocko wrote:
> On Wed 25-01-23 04:34:57, Leonardo Bras wrote:
> > Disclaimer:
> > a - The cover letter got bigger than expected, so I had to split it into
> > sections to better organize myself. I am not very comfortable with it.
> > b - Performance numbers below do not include patch 5/5 (Remove flags
> > from memcg_stock_pcp), which could further improve performance for
> > drain_all_stock(), but I only noticed the optimization at the
> > last minute.
> >
> >
> > 0 - Motivation:
> > On the current codebase, when drain_all_stock() is run, it schedules a
> > drain_local_stock() on each cpu that has a percpu stock associated with a
> > descendant of the given root_memcg.
> >
> > This happens even on 'isolated cpus', a feature commonly used by workloads
> > that are sensitive to interruptions and context switching, such as vRAN and
> > Industrial Control Systems.
> >
> > Since this scheduling behavior is a problem for those workloads, the proposal
> > is to replace the current local_lock + schedule_work_on() solution with a
> > per-cpu spinlock.
>
> IIRC we have also discussed that isolated CPUs can simply opt out
> of the pcp caching and therefore the problem would be avoided
> altogether without changes to the locking scheme. I do not see anything
> regarding that in this submission. Could you elaborate on why you have
> abandoned this option?
Hello Michal,

I understand pcp caching is a nice-to-have.

So while I kept the idea of disabling pcp caching in mind as an option, I first
tried to understand what kind of impact we would see from changing the locking
scheme.

After gathering the data shown in the cover letter, I found that the performance
impact appears not to be that big. So, in order to keep the pcp cache active on
isolated cpus, I decided to focus my effort on the locking scheme change.

I mean, my rationale is: if there is an inexpensive way of keeping the feature,
why should we abandon it?
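
To make the comparison concrete, here is a rough sketch of the direction I am
describing. This is not the actual patch: field and function names are
simplified, and lock initialization is omitted. The idea is that each per-cpu
stock carries a spinlock that a remote cpu may take, so drain_all_stock() can
drain a stock in place instead of scheduling work on isolated cpus:

struct memcg_stock_pcp {
	spinlock_t lock;	/* replaces the local_lock */
	struct mem_cgroup *cached;
	unsigned int nr_pages;
};
static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);

static void drain_all_stock(struct mem_cgroup *root_memcg)
{
	int cpu;

	/* Drain every cpu's stock from the caller: no work item, no IPI. */
	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = per_cpu_ptr(&memcg_stock, cpu);
		unsigned long flags;

		spin_lock_irqsave(&stock->lock, flags);
		if (stock->cached &&
		    mem_cgroup_is_descendant(stock->cached, root_memcg))
			drain_stock(stock);	/* flush pages back to the memcg */
		spin_unlock_irqrestore(&stock->lock, flags);
	}
}

The tradeoff is that the local fast path (charge/uncharge on the owning cpu)
now takes a spinlock instead of a local_lock, which is what the performance
numbers in the cover letter try to quantify.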
Best regards,
Leo