Message-ID: <907A5EDC-F9D7-4D27-BAC3-5EAAE151AA7B@epfl.ch>
Date: Sat, 22 Feb 2025 15:13:50 +0000
From: Georgiy Konstantinovich Lebedev <georgiy.lebedev@...l.ch>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-mm@...r.kernel.org" <linux-mm@...r.kernel.org>
Subject: Does NUMA_BALANCING_MEMORY_TIERING work with hugetlb pages?

Hello,

I am having trouble figuring out whether the NUMA_BALANCING_MEMORY_TIERING
feature of /proc/sys/kernel/numa_balancing works with hugetlb pages.

I could not find any information about hugetlb pages in the documentation
related to this feature.
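
For context, memory tiering is enabled on my machine by writing 2 to the sysctl (which I understand corresponds to NUMA_BALANCING_MEMORY_TIERING); below is a minimal sanity check I run before the test (the helper is just illustrative, and the bitmask interpretation is my reading of the sysctl documentation):
```
#include <stdio.h>

/* Illustrative check: read the numa_balancing sysctl and verify that the
 * memory tiering bit (NUMA_BALANCING_MEMORY_TIERING == 2, to my
 * understanding) is set. */
int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/numa_balancing", "r");
	int mode = -1;

	if (!f)
		return 1;
	if (fscanf(f, "%d", &mode) != 1)
		mode = -1;
	fclose(f);
	printf("numa_balancing = %d (memory tiering %s)\n", mode,
	       (mode & 2) ? "enabled" : "disabled");
	return (mode & 2) ? 0 : 1;
}
```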

I have also tried searching through the kernel codebase, but all I have found is
that hugetlb pages are filtered out by the `should_skip_vma` function in
`mm/vmscan.c`; tracing its callers, I could not tell whether it is involved in
memory tiering.

I have tried running a memory tiering stress test as follows, but I am not
seeing any promotions or demotions in /proc/vmstat:
```
#include <stddef.h>

#define HUGE_PAGE_SIZE (2UL * 1024 * 1024) /* 2 MiB hugetlb pages on my system */

/* Repeatedly touch the last 10% of the region so those pages look hot. */
void trigger_tpp(void *addr, size_t n_pages) {
	size_t offset = n_pages - n_pages / 10;
	for (size_t k = 0; k < 100000000; ++k) {
		for (size_t i = offset; i < n_pages; ++i) {
			volatile char *page = (char *)addr + (i * HUGE_PAGE_SIZE);
			*page; /* force an actual access so it is not optimized away */
		}
	}
}
```
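
For reference, these are the /proc/vmstat counters I am watching (the selection below is my own guess at what is relevant for promotion/demotion activity):
```
#include <stdio.h>
#include <string.h>

/* Print the /proc/vmstat counters related to NUMA hinting, promotion and
 * demotion; the key list is just the set I believe is relevant. */
int main(void)
{
	static const char *keys[] = {
		"numa_hint_faults", "numa_pages_migrated",
		"pgpromote_success", "pgpromote_candidate",
		"pgdemote_kswapd", "pgdemote_direct",
	};
	char line[256];
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		for (size_t i = 0; i < sizeof(keys) / sizeof(keys[0]); ++i)
			if (!strncmp(line, keys[i], strlen(keys[i])))
				fputs(line, stdout);
	}
	fclose(f);
	return 0;
}
```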

The setup for the stress test is as follows:
1. I allocate hugetlb pages to use almost all the available DRAM memory:
`/sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages`
reports 124928 (244 GB) out of 251 GB available.
2. I allocate hugetlb pages for the workload on the CXL-attached memory:
`/sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages`
reports 10240 (20 GB).
3. numactl -H reports 1611 MB of free memory on node 1.
4. I "eat" 242 GB of huge pages by running a background application that mmaps
and faults memory:
`/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages`
reports 1021 (2 GB).
5. I run the memory tiering stress test with 20 GB of huge pages (mmapped and
faulted roughly as in the sketch after this list); numastat reports 2042 MB of
huge pages on node 1 and 18438 MB of huge pages on node 3.
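
For completeness, the workload's huge pages are allocated roughly like this (a simplified sketch with error handling trimmed; the node number, the 2 MiB page size, and the mbind-based placement are specific to my setup):
```
#define _GNU_SOURCE
#include <numaif.h>     /* mbind(); link with -lnuma */
#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

#define HUGE_PAGE_SIZE (2UL * 1024 * 1024)

/* Map n_pages 2 MiB hugetlb pages, bind them to the given NUMA node
 * (node 3, the CXL-attached memory, in my setup), then fault them in. */
static void *alloc_hugetlb_on_node(size_t n_pages, int node)
{
	size_t len = n_pages * HUGE_PAGE_SIZE;
	unsigned long nodemask = 1UL << node;
	void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (addr == MAP_FAILED)
		return NULL;
	/* Restrict the mapping to the target node before faulting. */
	if (mbind(addr, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8,
		  MPOL_MF_STRICT)) {
		munmap(addr, len);
		return NULL;
	}
	/* Touch every huge page so it is actually allocated on the node. */
	for (size_t i = 0; i < n_pages; ++i)
		memset((char *)addr + i * HUGE_PAGE_SIZE, 0, 1);
	return addr;
}
```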

Thank you in advance for your time!

Regards,
Georgiy Lebedev
