Message-ID: <e706e2fc-201f-4e45-8dd9-e2e17c269466@gmail.com>
Date: Thu, 8 May 2025 17:35:10 +0100
From: Usama Arif <usamaarif642@...il.com>
To: David Hildenbrand <david@...hat.com>,
 Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org
Cc: hannes@...xchg.org, shakeel.butt@...ux.dev, riel@...riel.com,
 ziy@...dia.com, baolin.wang@...ux.alibaba.com, lorenzo.stoakes@...cle.com,
 Liam.Howlett@...cle.com, npache@...hat.com, ryan.roberts@....com,
 linux-kernel@...r.kernel.org, kernel-team@...a.com
Subject: Re: [PATCH 0/1] prctl: allow overriding system THP policy to always



On 08/05/2025 12:06, David Hildenbrand wrote:
> On 07.05.25 16:00, Usama Arif wrote:
>> Allowing the global THP policy to be overridden per process lets workloads
>> that have been shown to benefit from hugepages use them, without regressing
>> workloads that wouldn't benefit. This allows such workloads to be
>> run/stacked on the same machine.
>>
>> It also helps in rolling out hugepages in hyperscaler configurations
>> for workloads that benefit from them, where a single THP policy is
>> likely to be used across the entire fleet, and prctl helps override it.
>>
>> An advantage of doing it via prctl vs creating a cgroup-specific
>> option (like /sys/fs/cgroup/test/memory.transparent_hugepage.enabled) is
>> that it works even when no cgroups are present, and my understanding is
>> that there is a strong preference for cgroup controls being hierarchical,
>> which usually means they take a numerical value.
>>
>>
>> The output and code of the test program are below:
>>
>> [root@vm4 vmuser]# echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
>> [root@vm4 vmuser]# echo inherit > /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
>> [root@vm4 vmuser]# ./a.out
>> Default THP setting:
>> THP is not set to 'always'.
>> PR_SET_THP_ALWAYS = 1
>> THP is set to 'always'.
>> PR_SET_THP_ALWAYS = 0
>> THP is not set to 'always'.
> 
> Some quick feedback:
> 
> (1) The "always" in PR_SET_THP_ALWAYS does not look future proof. Why wouldn't someone want to force-disable them for a process (-> "never") or set it to some other new mode ("-> defer" that is currently on the list).

Yes, I agree with this. I think there are 3 possible ways forward, which I outlined in [1] in reply to Zi Yan.
I like the flags2 approach, but let me know what you think.

[1] https://lore.kernel.org/all/9ed673ad-764f-4f46-84a7-ef98b19d22ec@gmail.com/
> 
> (2) In your example, is the toggle specific to 2M THP or the global toggle ...? Unclear. And that makes this interface also suboptimal.

In this approach you would override the folio sizes that are set to inherit, and I think that's the right approach. So if you have e.g. 2M and 16K set to inherit,
and the global setting is madvise, doing PR_SET_THP_ALWAYS would change those folio sizes to always.
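
For reference, the prctl side of this approach is just a per-mm bit toggle; the read side is the v1
diff at the end of this mail. A rough sketch (not the exact patch code), modelled on the existing
PR_SET_THP_DISABLE handling in kernel/sys.c:

	case PR_SET_THP_ALWAYS:
		if (arg3 || arg4 || arg5)
			return -EINVAL;
		if (mmap_write_lock_killable(me->mm))
			return -EINTR;
		/* Toggle the per-mm bit that thp_vma_allowable_orders() tests */
		if (arg2)
			set_bit(MMF_THP_ALWAYS, &me->mm->flags);
		else
			clear_bit(MMF_THP_ALWAYS, &me->mm->flags);
		mmap_write_unlock(me->mm);
		break;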

> 
> (3) I'm a bit concerned about interaction with per-VMA settings (the one we already have, and order-specific ones that people were discussing). It's going to be a mess if we have global, per-process, per-vma and then some other policies (ebpf? whatever else?) on top.
> 
> 
> The low-hanging fruit would be a per-process toggle that only controls the existing per-VMA toggle: for example, with the semantics that
> 
> (1) All new (applicable) VMAs start with VM_HUGEPAGE
> (2) All existing (applicable) VMAs that are *not* VM_NOHUGEPAGE become VM_HUGEPAGE.
> 
> 
> We did something similar with PR_SET_MEMORY_MERGE.
> 

For this, you mean the prctl call would do for_each_vma and set VM_HUGEPAGE to implement point 2.
For point 1, I think we would still need extra mm->flags, i.e. MMF_VM_THP_MADVISE/DEFER/ALWAYS/NEVER.
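
Something like the below (sketch only: thp_enable_always() is a made-up name, and the "applicable"
filtering and error handling are left out), modelled on how ksm_enable_merge_any() handles existing
VMAs for PR_SET_MEMORY_MERGE:

static int thp_enable_always(struct mm_struct *mm)
{
	struct vm_area_struct *vma;
	VMA_ITERATOR(vmi, mm, 0);

	if (mmap_write_lock_killable(mm))
		return -EINTR;

	/* (1) newly created VMAs would consult this mm flag */
	set_bit(MMF_VM_THP_ALWAYS, &mm->flags);

	/* (2) existing VMAs that are not VM_NOHUGEPAGE become VM_HUGEPAGE */
	for_each_vma(vmi, vma) {
		if (!(vma->vm_flags & VM_NOHUGEPAGE))
			vm_flags_set(vma, VM_HUGEPAGE);
	}

	mmap_write_unlock(mm);
	return 0;
}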

I think it would have the same effect as what this patch is trying to do, but it would be more
expensive in terms of both code changes and the cost of the actual call, as you now have to walk
all VMAs. On the other hand, you wouldn't need the diff below from v1. I do feel the current approach
in the patch is simpler, but if your point 3 is better in terms of code maintainability, I'm happy to
make that change in v2.


diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2f190c90192d..0587dc4b8e2d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -293,7 +293,8 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
                if (vm_flags & VM_HUGEPAGE)
                        mask |= READ_ONCE(huge_anon_orders_madvise);
                if (hugepage_global_always() ||
-                   ((vm_flags & VM_HUGEPAGE) && hugepage_global_enabled()))
+                   ((vm_flags & VM_HUGEPAGE) && hugepage_global_enabled()) ||
+                   test_bit(MMF_THP_ALWAYS, &vma->vm_mm->flags))
                        mask |= READ_ONCE(huge_anon_orders_inherit);
 
                orders &= mask; 
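
For completeness, since only the output of the test program survived in the quote above, here is a
rough reconstruction of what such a test could look like. This is not the original test code: the
PR_SET_THP_ALWAYS value is a placeholder, and whether the process is effectively "always" is inferred
by checking if a non-madvised anonymous mapping gets backed by a THP (AnonHugePages in smaps).

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/prctl.h>

#ifndef PR_SET_THP_ALWAYS
#define PR_SET_THP_ALWAYS 78	/* placeholder; the real value comes from the patch */
#endif

#define PMD_SIZE (2UL << 20)

/* Fault in a 2M-aligned anon mapping without MADV_HUGEPAGE and report
 * whether the kernel backed anything with a THP. */
static int thp_used(void)
{
	size_t len = 2 * PMD_SIZE;
	char line[256];
	long huge_kb = 0;
	char *buf, *aligned;
	FILE *f;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}
	aligned = (char *)(((unsigned long)buf + PMD_SIZE - 1) & ~(PMD_SIZE - 1));
	memset(aligned, 1, PMD_SIZE);

	f = fopen("/proc/self/smaps", "r");
	while (f && fgets(line, sizeof(line), f)) {
		long kb;

		if (sscanf(line, "AnonHugePages: %ld kB", &kb) == 1)
			huge_kb += kb;
	}
	if (f)
		fclose(f);
	munmap(buf, len);
	return huge_kb > 0;
}

static void report(void)
{
	printf("THP is %sset to 'always'.\n", thp_used() ? "" : "not ");
}

int main(void)
{
	printf("Default THP setting:\n");
	report();

	if (prctl(PR_SET_THP_ALWAYS, 1, 0, 0, 0))
		perror("prctl");
	printf("PR_SET_THP_ALWAYS = 1\n");
	report();

	if (prctl(PR_SET_THP_ALWAYS, 0, 0, 0, 0))
		perror("prctl");
	printf("PR_SET_THP_ALWAYS = 0\n");
	report();
	return 0;
}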
