Message-Id: <20210301161236.194203607@linuxfoundation.org>
Date: Mon, 1 Mar 2021 17:14:33 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Dave Hansen <dave.hansen@...ux.intel.com>,
Ben Widawsky <ben.widawsky@...el.com>,
Oscar Salvador <osalvador@...e.de>,
David Rientjes <rientjes@...gle.com>,
Christoph Lameter <cl@...ux.com>,
Alex Shi <alex.shi@...ux.alibaba.com>,
Daniel Wagner <dwagner@...e.de>,
"Tobin C. Harding" <tobin@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Huang Ying <ying.huang@...el.com>,
Dan Williams <dan.j.williams@...el.com>, Qian Cai <cai@....pw>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 5.11 705/775] mm/vmscan: restore zone_reclaim_mode ABI
From: Dave Hansen <dave.hansen@...ux.intel.com>
commit 519983645a9f2ec339cabfa0c6ef7b09be985dd0 upstream.
I went to go add a new RECLAIM_* mode for the zone_reclaim_mode sysctl.
Like a good kernel developer, I also went to go update the
documentation.  I noticed that the bits in the documentation didn't
match the bits in the #defines.

The VM never explicitly checks the RECLAIM_ZONE bit.  The bit is,
however, implicitly checked when checking 'node_reclaim_mode==0'.  The
RECLAIM_ZONE #define was removed in a cleanup.  That, by itself, is
fine.  But, when the bit (bit 0) was removed, the _other_ bit locations
also got changed.  That's not OK because the bit values are documented
to mean one specific thing.  Users surely do not expect the meaning to
change from kernel to kernel.

The end result is that if someone had a script that did:

    sysctl vm.zone_reclaim_mode=1

it would have gone from enabling node reclaim for clean unmapped pages
to writing out pages during node reclaim after the commit in question.
That's not great.
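To make that concrete, here is a minimal userspace sketch (illustrative
only, not part of the patch; the REGRESSED_* names are made up) showing
how the same value written by such a script is interpreted under the
documented bit layout versus the layout introduced by 648b5cf368e0:

    #include <stdio.h>

    /* Documented/restored layout (see the mm/vmscan.c hunk below). */
    #define RECLAIM_ZONE  (1 << 0)  /* plain node reclaim of clean, unmapped pages */
    #define RECLAIM_WRITE (1 << 1)  /* write out pages during node reclaim */
    #define RECLAIM_UNMAP (1 << 2)  /* unmap pages during node reclaim */

    /* Layout after 648b5cf368e0, before this fix (illustrative names). */
    #define REGRESSED_RECLAIM_WRITE (1 << 0)
    #define REGRESSED_RECLAIM_UNMAP (1 << 1)

    int main(void)
    {
            int mode = 1;   /* what "sysctl vm.zone_reclaim_mode=1" writes */

            /* Documented meaning of bit 0: basic node reclaim, no writeout. */
            printf("documented: writeout=%d\n", !!(mode & RECLAIM_WRITE));
            /* Regressed meaning: the same value now enables writeout. */
            printf("regressed:  writeout=%d\n", !!(mode & REGRESSED_RECLAIM_WRITE));
            return 0;
    }
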
Put the bits back the way they were and add a comment so something like
this is a bit harder to do again. Update the documentation to make it
clear that the first bit is ignored.
Link: https://lkml.kernel.org/r/20210219172555.FF0CDF23@viggo.jf.intel.com
Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
Fixes: 648b5cf368e0 ("mm/vmscan: remove unused RECLAIM_OFF/RECLAIM_ZONE")
Reviewed-by: Ben Widawsky <ben.widawsky@...el.com>
Reviewed-by: Oscar Salvador <osalvador@...e.de>
Acked-by: David Rientjes <rientjes@...gle.com>
Acked-by: Christoph Lameter <cl@...ux.com>
Cc: Alex Shi <alex.shi@...ux.alibaba.com>
Cc: Daniel Wagner <dwagner@...e.de>
Cc: "Tobin C. Harding" <tobin@...nel.org>
Cc: Christoph Lameter <cl@...ux.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Huang Ying <ying.huang@...el.com>
Cc: Dan Williams <dan.j.williams@...el.com>
Cc: Qian Cai <cai@....pw>
Cc: <stable@...r.kernel.org>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
Documentation/admin-guide/sysctl/vm.rst | 10 +++++-----
mm/vmscan.c | 9 +++++++--
2 files changed, 12 insertions(+), 7 deletions(-)
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -983,11 +983,11 @@ that benefit from having their data cach
left disabled as the caching effect is likely to be more important than
data locality.
-zone_reclaim may be enabled if it's known that the workload is partitioned
-such that each partition fits within a NUMA node and that accessing remote
-memory would cause a measurable performance reduction. The page allocator
-will then reclaim easily reusable pages (those page cache pages that are
-currently not used) before allocating off node pages.
+Consider enabling one or more zone_reclaim mode bits if it's known that the
+workload is partitioned such that each partition fits within a NUMA node
+and that accessing remote memory would cause a measurable performance
+reduction. The page allocator will take additional actions before
+allocating off node pages.
Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes. Zone
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4095,8 +4095,13 @@ module_init(kswapd_init)
*/
int node_reclaim_mode __read_mostly;
-#define RECLAIM_WRITE (1<<0) /* Writeout pages during reclaim */
-#define RECLAIM_UNMAP (1<<1) /* Unmap pages during reclaim */
+/*
+ * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
+ * ABI. New bits are OK, but existing bits can never change.
+ */
+#define RECLAIM_ZONE (1<<0) /* Run shrink_inactive_list on the zone */
+#define RECLAIM_WRITE (1<<1) /* Writeout pages during reclaim */
+#define RECLAIM_UNMAP (1<<2) /* Unmap pages during reclaim */
/*
* Priority for NODE_RECLAIM. This determines the fraction of pages