Open Source and information security mailing list archives
Message-ID: <aadf50b1-151b-41c6-b60c-5f1f2a4f2d8e@redhat.com>
Date: Fri, 5 Sep 2025 17:28:28 +0200
From: David Hildenbrand <david@...hat.com>
To: Usama Arif <usamaarif642@...il.com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
 Lorenzo Stoakes <lorenzo.stoakes@...cle.com>, Zi Yan <ziy@...dia.com>,
 Baolin Wang <baolin.wang@...ux.alibaba.com>,
 "Liam R. Howlett" <Liam.Howlett@...cle.com>, Nico Pache <npache@...hat.com>,
 Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
 Barry Song <baohua@...nel.org>
Subject: Re: [PATCH v1] mm/huge_memory: fix shrinking of all-zero THPs with
 max_ptes_none default

On 05.09.25 17:16, Usama Arif wrote:
> 
> 
> On 05/09/2025 16:04, David Hildenbrand wrote:
>> On 05.09.25 17:01, Usama Arif wrote:
>>>
>>>
>>> On 05/09/2025 15:58, David Hildenbrand wrote:
>>>> On 05.09.25 16:53, Usama Arif wrote:
>>>>>
>>>>>
>>>>> On 05/09/2025 15:46, David Hildenbrand wrote:
>>>>>> [...]
>>>>>>
>>>>>>>
>>>>>>> The reason I did this is for the case where you change max_ptes_none after the THP is added
>>>>>>> to the deferred split list but *before* memory pressure, i.e. before the shrinker runs,
>>>>>>> so that it's considered for splitting.
>>>>>>
>>>>>> Yeah, I was assuming that was the reason why the shrinker is enabled as default.
>>>>>>
>>>>>> But in any sane system, the admin would enable the shrinker early. If not, we can look into handling it differently.
>>>>>
>>>>> Yes, I do this as well, i.e. have a low value from the start.
>>>>>
>>>>> Does it make sense to disable the shrinker if max_ptes_none is 511? It won't shrink
>>>>> the usecase you are describing below, but we won't encounter the increased CPU usage.
>>>>
>>>> I don't really see why we should do that.
>>>>
>>>> If the shrinker is a problem, then the shrinker should be disabled. But if it is enabled, we should be shrinking as documented.
>>>>
>>>> Without more magic around our THP toggles (we want less) :)
>>>>
>>>> Shrinking happens when we are under memory pressure, so I am not really sure how relevant the scanning bit is, and if it is relevant enough to change the shrinker default.
>>>>
>>>
>>> Yes, agreed. I also don't have numbers to back up my worry, it's all theoretical :)
>>
>> BTW, I was also wondering if we should just always add all THP to the deferred split list, and make the split toggle just affect whether we process them or not (scan or not).
>>
>> I mean, as a default we add all of them to the list already right now, even though nothing would ever get reclaimed as default.
>>
>> What's your take?
>>
> 
> Hmm, I probably didn't understand what you meant to say here:
> we already add all of them to the list in __do_huge_pmd_anonymous_page and collapse_huge_page, and
> the split_underused_thp toggle (set/cleared via shrink_underused) checked in deferred_split_folio decides whether we process them or not.

This is what I mean:

commit 3952b6f6b671ca7d69fd1783b1abf4806f90d436 (HEAD -> max_ptes_none)
Author: David Hildenbrand <david@...hat.com>
Date:   Fri Sep 5 17:22:01 2025 +0200

     mm/huge_memory: always add THPs to the deferred split list
     
     When disabling the shrinker and then re-enabling it, any anon THPs
     allocated in the meantime were never added to the deferred split
     list, so they can never get shrunk.
     
     That also means that we cannot disable the shrinker as default during
     boot, because we would miss some THPs later when enabling it.
     
     So always add them to the deferred split list, and only skip the
     scanning if the shrinker is disabled.
     
     This is effectively what we do on all systems out there already, unless
     they disable the shrinker.
     
     Signed-off-by: David Hildenbrand <david@...hat.com>

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index aa3ed7a86435b..3ee857c1d3754 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4052,9 +4052,6 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
         if (folio_order(folio) <= 1)
                 return;
  
-       if (!partially_mapped && !split_underused_thp)
-               return;
-
         /*
          * Exclude swapcache: originally to avoid a corrupt deferred split
          * queue. Nowadays that is fully prevented by memcg1_swapout();
@@ -4175,6 +4172,8 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
                 bool underused = false;
  
                 if (!folio_test_partially_mapped(folio)) {
+                       if (!split_underused_thp)
+                               goto next;
                         underused = thp_underused(folio);
                         if (!underused)
                                 goto next;


-- 
Cheers

David / dhildenb

