Message-ID: <20240604084533.GA68919@system.software.com>
Date: Tue, 4 Jun 2024 17:45:33 +0900
From: Byungchul Park <byungchul@...com>
To: "Huang, Ying" <ying.huang@...el.com>
Cc: akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org, kernel_team@...ynix.com, hannes@...xchg.org,
	iamjoonsoo.kim@....com, rientjes@...gle.com
Subject: Re: [PATCH v2] mm: let kswapd work again for node that used to be
 hopeless but may not now

On Tue, Jun 04, 2024 at 03:57:54PM +0800, Huang, Ying wrote:
> Byungchul Park <byungchul@...com> writes:
> 
> > Changes from v1:
> > 	1. Don't allow kswapd to resume if the system is under memory
> > 	   pressure that might affect direct reclaim in any way, e.g.
> > 	   if NR_FREE_PAGES is less than (low wmark + min wmark)/2
> > 	   (see the sketch below).
> >
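For illustration, here is that gate as a stand-alone C check.  The
names (struct node_stats, free_pages_above_gate) and the sample numbers
are made up for the sketch and are not the kernel's:

/*
 * Sketch only: a model of the "(low wmark + min wmark)/2" gate from
 * the changelog above, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

struct node_stats {
	unsigned long nr_free_pages;	/* NR_FREE_PAGES on the node   */
	unsigned long wmark_min;	/* node-wide min watermark sum */
	unsigned long wmark_low;	/* node-wide low watermark sum */
};

/* Only consider resuming kswapd when free pages are at least
 * (low wmark + min wmark) / 2, so direct reclaim is not disturbed. */
static bool free_pages_above_gate(const struct node_stats *s)
{
	return s->nr_free_pages >= (s->wmark_low + s->wmark_min) / 2;
}

int main(void)
{
	struct node_stats s = {
		.nr_free_pages = 900,
		.wmark_min     = 500,
		.wmark_low     = 1500,
	};

	/* Gate is (1500 + 500) / 2 = 1000, so 900 free pages fail it. */
	printf("above gate: %d\n", free_pages_above_gate(&s));
	return 0;
}
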
> > --->8---
> > From 6c73fc16b75907f5da9e6b33aff86bf7d7c9dd64 Mon Sep 17 00:00:00 2001
> > From: Byungchul Park <byungchul@...com>
> > Date: Tue, 4 Jun 2024 15:27:56 +0900
> > Subject: [PATCH v2] mm: let kswapd work again for node that used to be hopeless but may not now
> >
> > A system should run with kswapd working in the background when it is
> > under memory pressure, such as when available memory is below the low
> > watermark and there are reclaimable folios.
> >
> > However, the current code lets the system keep running with kswapd
> > stopped once kswapd has been stopped after more than
> > MAX_RECLAIM_RETRIES failures, leaving everything to direct reclaim,
> > even though there are reclaimable folios that kswapd could reclaim.
> > This case was observed in the following scenario:
> >
> >    CONFIG_NUMA_BALANCING enabled
> >    sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
> >    numa node0 (500GB local DRAM, 128 CPUs)
> >    numa node1 (100GB CXL memory, no CPUs)
> >    swap off
> >
> >    1) Run a workload with big anon pages e.g. mmap(200GB).
> >    2) Continue adding the same workload to the system.
> >    3) The anon pages are placed in node0 by promotion/demotion.
> >    4) kswapd0 stops because of the unreclaimable anon pages in node0.
> >    5) Kill the memory hoggers to restore the system.
> >
> > After restoring the system at 5), the system keeps running without
> > kswapd.  Even worse, the tiering mechanism can no longer work, since
> > it relies on kswapd for demotion.
> 
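(For readers following along, a minimal stand-alone C model of the
failure behaviour described above; the names are illustrative and this
is only the shape of the logic, not mm/vmscan.c.)

/*
 * Sketch: once a node accumulates MAX_RECLAIM_RETRIES kswapd failures,
 * further wakeups are skipped and, with kswapd out of the picture,
 * only direct reclaim ends up clearing the counter.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_RECLAIM_RETRIES 16	/* illustrative value */

struct node_model {
	unsigned int kswapd_failures;
};

static bool wakeup_kswapd_model(struct node_model *node)
{
	/* The "hopeless" node is skipped: kswapd stays asleep. */
	if (node->kswapd_failures >= MAX_RECLAIM_RETRIES)
		return false;
	return true;	/* kswapd would be woken for this node */
}

static void direct_reclaim_model(struct node_model *node, bool progress)
{
	/* Progress made by direct reclaim resets the failure counter. */
	if (progress)
		node->kswapd_failures = 0;
}

int main(void)
{
	struct node_model node = { .kswapd_failures = MAX_RECLAIM_RETRIES };

	printf("woken: %d\n", wakeup_kswapd_model(&node));	/* 0: looks hopeless */
	direct_reclaim_model(&node, true);
	printf("woken: %d\n", wakeup_kswapd_model(&node));	/* 1: failures cleared */
	return 0;
}
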
> We have run into the situation where kswapd is kept in the failure
> state for a long time on a multi-tier system.  I think that your
> solution is too

My solution just gives kswapd a chance to work again, even if
kswapd_failures >= MAX_RECLAIM_RETRIES, when there are potentially
reclaimable folios.  That's it.
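
For illustration, a minimal stand-alone C model of that idea;
maybe_resume_kswapd() and struct node_model are hypothetical names for
the sketch, not kernel code:

/*
 * Sketch: even after kswapd_failures reaches MAX_RECLAIM_RETRIES, give
 * kswapd another chance when the node still has potentially
 * reclaimable folios and the free-page gate from the changelog holds.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_RECLAIM_RETRIES 16	/* illustrative value */

struct node_model {
	unsigned int  kswapd_failures;
	unsigned long nr_reclaimable;	/* potentially reclaimable folios */
	bool          free_above_gate;	/* NR_FREE_PAGES >= (low+min)/2   */
};

/* Clear the failure state so the next wakeup actually runs kswapd. */
static bool maybe_resume_kswapd(struct node_model *node)
{
	if (node->kswapd_failures < MAX_RECLAIM_RETRIES)
		return false;		/* kswapd is not stopped */
	if (!node->nr_reclaimable || !node->free_above_gate)
		return false;		/* nothing to gain, or memory too tight */

	node->kswapd_failures = 0;	/* let kswapd work again */
	return true;
}

int main(void)
{
	struct node_model node = {
		.kswapd_failures = MAX_RECLAIM_RETRIES,
		.nr_reclaimable  = 4096,
		.free_above_gate = true,
	};

	printf("resumed: %d\n", maybe_resume_kswapd(&node));	/* 1 */
	return 0;
}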

> limited, because OOM killing may not happen, while the access pattern of

I don't get this.  OOM will happen just as it does now, through direct
reclaim.

> the workloads may change.  We have a preliminary and simple solution for
> this as follows,
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/vishal/tiering.git/commit/?h=tiering-0.8&id=17a24a354e12d4d4675d78481b358f668d5a6866

Whether tiering is involved or not, the same problem can arise if
kswapd gets stopped due to kswapd_failures >= MAX_RECLAIM_RETRIES.

	Byungchul

> where we try to wake up kswapd every 10 seconds to check whether it is
> in the failure state.  This is another possible solution.
> 
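(For illustration, a rough stand-alone C model of such a periodic
re-check; this is not the code in the linked branch, and every name
here is hypothetical.)

/*
 * Sketch: poll every 10 seconds and wake kswapd if it is stuck in the
 * failure state.  Userspace model only.
 */
#include <stdio.h>
#include <unistd.h>

#define MAX_RECLAIM_RETRIES	16	/* illustrative value */
#define RECHECK_INTERVAL_SEC	10

struct node_model {
	unsigned int kswapd_failures;
};

static void wake_kswapd_model(struct node_model *node)
{
	/* In the kernel this would be a kswapd wakeup for the node. */
	node->kswapd_failures = 0;
	printf("kswapd woken, failure state cleared\n");
}

int main(void)
{
	struct node_model node = { .kswapd_failures = MAX_RECLAIM_RETRIES };

	for (int i = 0; i < 3; i++) {
		if (node.kswapd_failures >= MAX_RECLAIM_RETRIES)
			wake_kswapd_model(&node);
		sleep(RECHECK_INTERVAL_SEC);
	}
	return 0;
}
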
> > However, node0 has pages newly allocated after 5) that might or might
> > not be reclaimable.  Since they are potentially reclaimable, it is
> > worth trying to reclaim them by letting kswapd work again.
> >
> 
> [snip]
> 
> --
> Best Regards,
> Huang, Ying
