Message-ID: <20240607071228.GA76933@system.software.com>
Date: Fri, 7 Jun 2024 16:12:28 +0900
From: Byungchul Park <byungchul@...com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, kernel_team@...ynix.com, hannes@...xchg.org,
iamjoonsoo.kim@....com, rientjes@...gle.com
Subject: Re: [PATCH v2] mm: let kswapd work again for node that used to be
hopeless but may not now
On Wed, Jun 05, 2024 at 11:19:02AM +0900, Byungchul Park wrote:
> On Wed, Jun 05, 2024 at 10:02:07AM +0800, Huang, Ying wrote:
> > Byungchul Park <byungchul@...com> writes:
> >
> > > On Tue, Jun 04, 2024 at 04:57:17PM +0800, Huang, Ying wrote:
> > >> Byungchul Park <byungchul@...com> writes:
> > >>
> > >> > On Tue, Jun 04, 2024 at 03:57:54PM +0800, Huang, Ying wrote:
> > >> >> Byungchul Park <byungchul@...com> writes:
> > >> >>
> > >> >> > Changes from v1:
> > >> >> > 1. Don't allow kswapd to resume if the system is under memory
> > >> >> > pressure that might affect direct reclaim in any way, e.g. if
> > >> >> > NR_FREE_PAGES is less than (low wmark + min wmark)/2.
> > >> >> >
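For illustration only, a rough sketch of the kind of guard meant above.
The helper name is made up, the per-zone loop is my interpretation, and
this is not the exact patch code; only the watermark helpers are the
usual kernel ones:

        /*
         * Allow kswapd to be resumed only if at least one zone of the
         * node has its free pages at or above the halfway point between
         * the min and low watermarks, so that resuming kswapd cannot
         * disturb a system that may soon need direct reclaim.
         */
        static bool can_resume_kswapd(struct pglist_data *pgdat)
        {
                int i;

                for (i = 0; i < MAX_NR_ZONES; i++) {
                        struct zone *zone = pgdat->node_zones + i;

                        if (!managed_zone(zone))
                                continue;

                        if (zone_page_state(zone, NR_FREE_PAGES) >=
                            (min_wmark_pages(zone) + low_wmark_pages(zone)) / 2)
                                return true;
                }

                return false;
        }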
> > >> >> > --->8---
> > >> >> > From 6c73fc16b75907f5da9e6b33aff86bf7d7c9dd64 Mon Sep 17 00:00:00 2001
> > >> >> > From: Byungchul Park <byungchul@...com>
> > >> >> > Date: Tue, 4 Jun 2024 15:27:56 +0900
> > >> >> > Subject: [PATCH v2] mm: let kswapd work again for node that used to be hopeless but may not now
> > >> >> >
> > >> >> > A system under memory pressure should have kswapd running in the
> > >> >> > background, for example when the available memory is below the low
> > >> >> > water mark and there are reclaimable folios.
> > >> >> >
> > >> >> > However, once kswapd has been stopped due to more than
> > >> >> > MAX_RECLAIM_RETRIES failures, the current code leaves it stopped until
> > >> >> > direct reclaim makes progress on that node, even if there are folios
> > >> >> > that kswapd could reclaim. This case was observed in the following
> > >> >> > scenario:
> > >> >> >
> > >> >> > CONFIG_NUMA_BALANCING enabled
> > >> >> > sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
> > >> >> > numa node0 (500GB local DRAM, 128 CPUs)
> > >> >> > numa node1 (100GB CXL memory, no CPUs)
> > >> >> > swap off
> > >> >> >
> > >> >> > 1) Run a workload with big anon pages e.g. mmap(200GB).
> > >> >> > 2) Continue adding the same workload to the system.
> > >> >> > 3) The anon pages are placed in node0 by promotion/demotion.
> > >> >> > 4) kswapd0 stops because of the unreclaimable anon pages in node0.
> > >> >> > 5) Kill the memory hoggers to restore the system.
> > >> >> >
> > >> >> > After restoring the system at 5), the system runs without kswapd.
> > >> >> > Even worse, the tiering mechanism is no longer able to work since it
> > >> >> > relies on kswapd for demotion.
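For context, the reason kswapd never comes back in this state is the
"hopeless node" gate on wakeup. A simplified sketch of that gate, not
the full mm/vmscan.c function:

        void wakeup_kswapd(struct zone *zone, gfp_t gfp_flags, int order,
                           enum zone_type highest_zoneidx)
        {
                pg_data_t *pgdat = zone->zone_pgdat;

                /* order/zoneidx bookkeeping omitted */

                if (!waitqueue_active(&pgdat->kswapd_wait))
                        return;

                /*
                 * Hopeless node: after MAX_RECLAIM_RETRIES consecutive
                 * failures, wakeup requests are ignored until the counter
                 * is reset by reclaim making progress on this node, which
                 * after step 5) may never happen without direct reclaim.
                 */
                if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES)
                        return;

                wake_up_interruptible(&pgdat->kswapd_wait);
        }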
> > >> >>
> > >> >> We have run into the situation where kswapd is kept in the failure state
> > >> >> for a long time in a multi-tier system. I think that your solution is too
> > >> >
> > >> > My solution just gives kswapd a chance to work again even if
> > >> > kswapd_failures >= MAX_RECLAIM_RETRIES, provided there are potentially
> > >> > reclaimable folios. That's it.
> > >> >
> > >> >> limited, because OOM killing may not happen, while the access pattern of
> > >> >
> > >> > I don't get this. OOM will happen as is, through direct reclaim.
> > >>
> > >> A system that fails to reclaim via kswapd may succeed in reclaiming via
> > >> direct reclaim, because more CPUs are used to scan the page tables.
> > >
> > > Honestly, I don't think so with this description.
> > >
> > > The fact that the system hit MAX_RECLAIM_RETRIES means the system is
> > > currently hopeless unless folios are reclaimed in a stronger way by
> > > *direct reclaim*. The solution for this situation should not be about
> > > letting more CPUs participate in reclaiming, again, *at least in this
> > > situation*.
> > >
> > > What you described here is true only in a normal state where the more
> > > CPUs work on reclaiming, the more reclaimable folios can be reclaimed.
> > > kswapd can be a helper *only* when there are kswapd-reclaimable folios.
> >
> > Sometimes, we cannot reclaim just because we don't scan fast enough, so
> > the Accessed-bit is set again during scanning. With more CPUs, we can
> > scan faster and make some progress. But, yes, this only covers one
> > situation; there are other situations too.
>
> What I mean is that *the issue we are trying to solve* is not a
> situation that can be solved by letting more CPUs participate in
> reclaiming.
Again, in the situation where kswapd has failed more than
MAX_RECLAIM_RETRIES times, that is, the node is considered hopeless, I
don't think it makes sense to wake up kswapd every 10 seconds. It'd be
more sensible to wake up kswapd only if there are *at least potentially*
reclaimable folios.
As Ying said, there's no way to precisely track whether folios are
reclaimable, but waking kswapd only once the possibility turns positive
is worth trying and looks more reasonable to me. Thoughts?
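To be concrete, a minimal sketch of the condition I have in mind
(illustrative helper names only; can_resume_kswapd() is the hypothetical
watermark guard from the v2 changelog above, not existing kernel code):

        /*
         * Consider waking kswapd on a "hopeless" node again only if the
         * watermark guard passes and the node still has folios on its
         * LRU lists that kswapd could plausibly reclaim.
         */
        static bool kswapd_worth_retrying(struct pglist_data *pgdat)
        {
                unsigned long lru_pages;

                if (!can_resume_kswapd(pgdat))
                        return false;

                lru_pages = node_page_state(pgdat, NR_ACTIVE_FILE) +
                            node_page_state(pgdat, NR_INACTIVE_FILE) +
                            node_page_state(pgdat, NR_ACTIVE_ANON) +
                            node_page_state(pgdat, NR_INACTIVE_ANON);

                return lru_pages > 0;
        }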
Byungchul
> Byungchul
>
> > --
> > Best Regards,
> > Huang, Ying
> >
> > > Byungchul
> > >
> > >> In a system with NUMA balancing based page promotion and page demotion
> > >> enabled, page promotion will wake up kswapd, but kswapd may fail in some
> > >> situations. But page promotion will not trigger direct reclaim or OOM.
> > >>
> > >> >> the workloads may change. We have a preliminary and simple solution for
> > >> >> this as follows,
> > >> >>
> > >> >> https://git.kernel.org/pub/scm/linux/kernel/git/vishal/tiering.git/commit/?h=tiering-0.8&id=17a24a354e12d4d4675d78481b358f668d5a6866
> > >> >
> > >> > Whether tiering is involved or not, the same problem can arise if
> > >> > kswapd gets stopped due to kswapd_failures >= MAX_RECLAIM_RETRIES.
> > >>
> > >> Your description is about tiering too. Can you describe a situation
> > >> without tiering?
> > >>
> > >> --
> > >> Best Regards,
> > >> Huang, Ying
> > >>
> > >> > Byungchul
> > >> >
> > >> >> where, every 10 seconds, we will try to wake up kswapd to check again if
> > >> >> it is in the failure state. This is another possible solution.
> > >> >>
> > >> >> > However, node0 has pages newly allocated after 5), which may or may
> > >> >> > not be reclaimable. Since those are potentially reclaimable, it's worth
> > >> >> > trying to reclaim them by allowing kswapd to work again.
> > >> >> >
> > >> >>
> > >> >> [snip]
> > >> >>
> > >> >> --
> > >> >> Best Regards,
> > >> >> Huang, Ying