Message-ID: <Y5mbHQSKuXY1Qojk@dhcp22.suse.cz>
Date: Wed, 14 Dec 2022 10:45:01 +0100
From: Michal Hocko <mhocko@...e.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: "Huang, Ying" <ying.huang@...el.com>,
Yang Shi <shy828301@...il.com>, Wei Xu <weixugc@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: memcg reclaim demotion wrt. isolation
On Tue 13-12-22 14:26:42, Dave Hansen wrote:
> On 12/13/22 07:41, Michal Hocko wrote:
> > This makes sense, but I suspect that this wasn't intended for memcg
> > triggered reclaim as well. That would mean that memory pressure in one
> > hierarchy could trigger paging out pages of a different hierarchy if
> > the demotion target is close to full.
> >
> > I haven't really checked the current kswapd wake-up logic, but I
> > suspect that kswapd would back off in most cases, so this shouldn't
> > really cause any big problems. But I guess it would be better to
> > simply not wake kswapd up for memcg reclaim. What do you think?
>
> You're right that this wasn't really considering memcg-based reclaim.
> The entire original idea was that demotion allocations should fail fast,
> but it would be nice if they could kick kswapd so they would
> *eventually* succeed and not just fail fast forever.
>
> Before we go trying to patch anything, I'd be really interested in what
> it does in practice. How much does it actually wake up kswapd? Does
> kswapd cause any collateral damage?
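
For reference, the "fail fast but kick kswapd" behavior above comes from
the gfp mask used for the demotion target allocation in mm/vmscan.c.
Quoting from memory, so the exact names may differ between kernel
versions, but it is roughly:

	struct migration_target_control mtc = {
		/*
		 * GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM drops both the
		 * direct reclaim and the kswapd wakeup bits; GFP_NOWAIT
		 * then adds __GFP_KSWAPD_RECLAIM back, so the allocation
		 * fails fast but still kicks kswapd on the demotion
		 * target node.
		 */
		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
			    __GFP_THISNODE | __GFP_NOWARN |
			    __GFP_NOMEMALLOC | GFP_NOWAIT,
		.nid = target_nid,
	};

i.e. the wakeup happens for every demotion attempt regardless of what
triggered the reclaim.
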
I haven't seen any real problem so far. I was just trying to wrap my
head around the consequences for the memory.demote memcg interface
discussed in [1]. See my reply to Johannes about the specific concerns.
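
To be more concrete about the "do not wake kswapd for memcg reclaim"
idea, I was thinking about something along these lines (completely
untested sketch; the scan_control/cgroup_reclaim() plumbing down to the
demotion path is only indicative):

	/*
	 * Untested sketch: only add the kswapd wakeup to the demotion
	 * allocation mask for globally triggered reclaim and keep memcg
	 * triggered reclaim as a plain fail-fast allocation.
	 */
	gfp_t demote_gfp = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
			   __GFP_THISNODE | __GFP_NOWARN |
			   __GFP_NOMEMALLOC;

	if (!cgroup_reclaim(sc))
		demote_gfp |= __GFP_KSWAPD_RECLAIM;

That would keep the existing behavior for global reclaim, while a limit
hit in one memcg hierarchy would no longer wake kswapd on a demotion
node shared with other hierarchies.
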
[1] http://lkml.kernel.org/r/87k02volwe.fsf@yhuang6-desk2.ccr.corp.intel.com
--
Michal Hocko
SUSE Labs