Message-ID: <ZdNFbiH1ufbOTIDx@tiehlicka>
Date: Mon, 19 Feb 2024 13:11:26 +0100
From: Michal Hocko <mhocko@...e.com>
To: "T.J. Mercier" <tjmercier@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Efly Young <yangyifei03@...ishou.com>, android-mm@...gle.com,
yuzhao@...gle.com, mkoutny@...e.com,
Yosry Ahmed <yosryahmed@...gle.com>, cgroups@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] mm: memcg: Use larger batches for proactive reclaim

On Tue 06-02-24 09:58:41, Michal Hocko wrote:
> On Mon 05-02-24 20:01:40, T.J. Mercier wrote:
> > On Mon, Feb 5, 2024 at 1:16 PM Michal Hocko <mhocko@...e.com> wrote:
> > >
> > > On Mon 05-02-24 12:47:47, T.J. Mercier wrote:
> > > > On Mon, Feb 5, 2024 at 12:36 PM Michal Hocko <mhocko@...e.com> wrote:
> > > [...]
> > > > > Think of something like
> > > > > timeout $TIMEOUT echo $TARGET > $MEMCG_PATH/memory.reclaim
> > > > > where timeout acts as a stop gap if the reclaim cannot finish in
> > > > > TIMEOUT.
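
For completeness, the userspace side of what I suggested there could
look like the sketch below. It is only an illustration: the cgroup path
and the timeout are made up, and it relies on the blocked write to
memory.reclaim failing with EINTR once the alarm interrupts it, the
same way timeout(1) interrupts the echo.

#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void on_alarm(int sig)
{
	/* Empty handler; its only job is to interrupt the blocked write(). */
	(void)sig;
}

int main(void)
{
	/* Hypothetical cgroup path and target, adjust for your setup. */
	const char *path = "/sys/fs/cgroup/example/memory.reclaim";
	const char *target = "104857600";	/* 100M reclaim target */
	struct sigaction sa;
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* No SA_RESTART, so the blocked write() returns with EINTR. */
	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = on_alarm;
	sigaction(SIGALRM, &sa, NULL);
	alarm(2);	/* the $TIMEOUT equivalent */

	if (write(fd, target, strlen(target)) < 0)
		perror("write (interrupted or failed)");

	close(fd);
	return 0;
}
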
> > > >
> > > > Yeah, I get the desired behavior, but using sc->nr_reclaimed to achieve
> > > > it is what's bothering me.
> > >
> > > I am not really happy about this subtlety. If we have a better way then
> > > let's do it. Better in its own patch, though.
> > >
> > > > It's already wired up that way though, so if you want to make this
> > > > change now then I can try to test for the difference using really
> > > > large reclaim targets.
> > >
> > > Yes, please. If you want it as a separate patch then no objection from me
> > > of course. If you do not like the nr_to_reclaim bailout then maybe we can
> > > go with a simple break-out flag in scan_control.
> > >
> > > Thanks!
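
To make the break-out flag idea concrete, a minimal sketch of what I
have in mind is below. The field and helper names are made up and the
struct is trimmed to the relevant members; the point is only that the
bailout is recorded explicitly rather than by overloading
sc->nr_reclaimed.

#include <linux/sched/signal.h>

struct scan_control {
	unsigned long nr_to_reclaim;	/* reclaim target for this run */
	unsigned long nr_reclaimed;	/* reclaimed so far */
	/* ... existing fields elided ... */
	bool bail_out;			/* hypothetical: stop reclaim ASAP */
};

/* Checked at the loop boundaries of the reclaim walk. */
static inline bool reclaim_should_bail(struct scan_control *sc)
{
	if (sc->bail_out)
		return true;
	if (fatal_signal_pending(current)) {
		sc->bail_out = true;
		return true;
	}
	return false;
}
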
> >
> > It's a bit difficult to test under the too_many_isolated check, so I
> > moved the fatal_signal_pending check outside of it and tried with that.
> > Performing full reclaim on the /uid_0 cgroup with a 250ms delay before
> > SIGKILL, I got an average of 16ms better latency with
> > sc->nr_to_reclaim across 20 runs, ignoring one 1s outlier with
> > SWAP_CLUSTER_MAX.
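
For reference, the hoisting described above looks roughly like the
sketch below, based on the shape of the too_many_isolated() loop in
shrink_inactive_list() in mm/vmscan.c (abridged, not the actual test
patch). Upstream, the fatal_signal_pending() bailout only runs while
isolation is throttled; moving it out makes it unconditional:

	/* We are about to die and free our memory. Return now. */
	if (fatal_signal_pending(current))
		return SWAP_CLUSTER_MAX;

	while (unlikely(too_many_isolated(pgdat, file, sc))) {
		if (stalled)
			return 0;

		/* Wait a bit for the reclaimer, then retry once. */
		stalled = true;
		reclaim_throttle(pgdat, VMSCAN_THROTTLE_ISOLATED);
	}
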
>
> This will obviously scale with the number of memcgs in the hierarchy, but
> you are right that too_many_isolated makes the whole fatal_signal_pending
> check rather inefficient. I had missed that. The reclaim path is
> rather convoluted, so this will likely be more complex than I
> anticipated. I will think about that some more.
>
> In order not to delay your patch, please repost with the suggested updates
> to the changelog. This needs addressing IMO, but I do not think it is
> critical at this stage.
Has a new version, or a proposal to refine the changelog, been
posted?
--
Michal Hocko
SUSE Labs