Message-Id: <20240405164920.2844-1-mcassell411@gmail.com>
Date: Fri,  5 Apr 2024 16:49:20 +0000
From: Matthew Cassell <mcassell411@...il.com>
To: corbet@....net,
	akpm@...ux-foundation.org,
	vbendel@...hat.com,
	rppt@...nel.org
Cc: linux-mm@...ck.org,
	linux-doc@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	mcassell411@...il.com
Subject: [PATCH] Documentation/admin-guide/sysctl/vm.rst: document the importance of NUMA-node count

If any bits are set in node_reclaim_mode (tunable via
/proc/sys/vm/zone_reclaim_mode), then get_page_from_freelist() gives page
allocations early access to reclaim via the node_reclaim() code path when
memory pressure increases. This behavior is most beneficial on machines
with multiple NUMA nodes. The above is mentioned in:

Commit 9eeff2395e3cfd05c9b2e6 ("[PATCH] Zone reclaim: Reclaim logic")
states "Zone reclaim is of particular importance for NUMA machines. It
can be more beneficial to reclaim a page than taking the performance
penalties that come with allocating a page on a REMOTE zone."
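
For reference, zone_reclaim_mode is a bitmask; vm.rst documents the bits as
1 = zone reclaim on, 2 = write dirty pages out, 4 = swap pages. A minimal,
illustrative userspace sketch (not part of this patch) that reads and
decodes the current setting:

/* decode_zone_reclaim.c - read and decode /proc/sys/vm/zone_reclaim_mode.
 * Illustrative sketch only; bit meanings are the ones documented in vm.rst.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/zone_reclaim_mode", "r");
	int mode = 0;

	if (!f || fscanf(f, "%d", &mode) != 1) {
		perror("zone_reclaim_mode");
		return 1;
	}
	fclose(f);

	printf("zone_reclaim_mode = %d\n", mode);
	printf("  node-local reclaim before off-node allocation: %s\n",
	       (mode & 1) ? "yes" : "no");
	printf("  reclaim may write out dirty pages:             %s\n",
	       (mode & 2) ? "yes" : "no");
	printf("  reclaim may unmap and swap pages:              %s\n",
	       (mode & 4) ? "yes" : "no");
	return 0;
}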

While the pros/cons of staying on node versus allocating remotely are
mentioned in commit histories and mailing lists, the trade-off isn't
specifically mentioned in Documentation/ and isn't possible with a lone
node. Imagine a situation where CONFIG_NUMA=y (the default on most major
distributions) and only a single NUMA node exists. The latter is a
contradiction in terms (single node == uniform memory access). Informing
the user via vm.rst that they get the most benefit when multiple nodes
exist seems helpful.
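
To make the single-node case concrete, here is a rough sketch (assuming the
usual sysfs layout, where /sys/devices/system/node/online exists when
CONFIG_NUMA=y) that counts online nodes and reports whether zone reclaim has
anything to optimize:

/* numa_reclaim_check.c - rough sketch: does enabling zone_reclaim_mode bits
 * make sense here?  A single online node means there is no remote node to
 * avoid, so zone reclaim has nothing to optimize.
 */
#include <stdio.h>
#include <string.h>

/* Count nodes in a sysfs range list such as "0", "0-3" or "0,2-3". */
static int count_nodes(const char *list)
{
	int count = 0, lo, hi;
	const char *p = list;

	while (*p) {
		if (sscanf(p, "%d-%d", &lo, &hi) == 2)
			count += hi - lo + 1;
		else if (sscanf(p, "%d", &lo) == 1)
			count += 1;
		p = strchr(p, ',');
		if (!p)
			break;
		p++;
	}
	return count;
}

int main(void)
{
	char buf[256] = "";
	FILE *f = fopen("/sys/devices/system/node/online", "r");
	int nodes;

	if (!f || !fgets(buf, sizeof(buf), f)) {
		perror("node/online");
		return 1;
	}
	fclose(f);

	nodes = count_nodes(buf);
	printf("online NUMA nodes: %d\n", nodes);
	if (nodes > 1)
		printf("multiple nodes: zone_reclaim_mode bits can keep allocations node-local\n");
	else
		printf("single node: zone_reclaim_mode has nothing to optimize\n");
	return 0;
}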

Signed-off-by: Matthew Cassell <mcassell411@...il.com>
---
 Documentation/admin-guide/sysctl/vm.rst | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index c59889de122b..10270548af2a 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -1031,7 +1031,8 @@ Consider enabling one or more zone_reclaim mode bits if it's known that the
 workload is partitioned such that each partition fits within a NUMA node
 and that accessing remote memory would cause a measurable performance
 reduction.  The page allocator will take additional actions before
-allocating off node pages.
+allocating off node pages. Keep in mind that enabling zone_reclaim_mode
+bits makes the most sense for topologies consisting of multiple NUMA nodes.
 
 Allowing zone reclaim to write out pages stops processes that are
 writing large amounts of data from dirtying pages on other nodes. Zone
-- 
2.34.1

