Message-ID: <c600a6c0-aa59-4896-9e0d-3649a32d1771@gmail.com>
Date: Mon, 9 Jun 2025 12:13:33 +0100
From: Usama Arif <usamaarif642@...il.com>
To: Zi Yan <ziy@...dia.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, david@...hat.com,
 linux-mm@...ck.org, hannes@...xchg.org, shakeel.butt@...ux.dev,
 riel@...riel.com, baolin.wang@...ux.alibaba.com, lorenzo.stoakes@...cle.com,
 Liam.Howlett@...cle.com, npache@...hat.com, ryan.roberts@....com,
 dev.jain@....com, hughd@...gle.com, linux-kernel@...r.kernel.org,
 linux-doc@...r.kernel.org, kernel-team@...a.com,
 Juan Yescas <jyescas@...gle.com>, Breno Leitao <leitao@...ian.org>
Subject: Re: [RFC] mm: khugepaged: use largest enabled hugepage order for
 min_free_kbytes



On 06/06/2025 17:10, Zi Yan wrote:
> On 6 Jun 2025, at 11:38, Usama Arif wrote:
> 
>> On 06/06/2025 16:18, Zi Yan wrote:
>>> On 6 Jun 2025, at 10:37, Usama Arif wrote:
>>>
>>>> On arm64 machines with 64K PAGE_SIZE, min_free_kbytes and hence the
>>>> watermarks are evaluated to extremely high values. For example, on a
>>>> server with 480G of memory, with only the 2M mTHP hugepage size set to
>>>> madvise and the rest of the sizes set to never, the min, low and high
>>>> watermarks evaluate to 11.2G, 14G and 16.8G respectively.
>>>> In contrast, with 4K PAGE_SIZE on the same machine and only the 2M THP
>>>> hugepage size set to madvise, the min, low and high watermarks evaluate
>>>> to 86M, 566M and 1G respectively.
>>>> This is because set_recommended_min_free_kbytes is designed for PMD
>>>> hugepages (pageblock_order = min(HPAGE_PMD_ORDER, PAGE_BLOCK_ORDER)).
>>>> Such high watermark values can cause performance and latency issues in
>>>> memory-bound applications on arm servers that use 64K PAGE_SIZE, even
>>>> though most of them would never actually use a 512M PMD THP.
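>>>>
>>>> As a rough worked example of where these numbers come from (a sketch of
>>>> the arithmetic in set_recommended_min_free_kbytes(), assuming two
>>>> populated zones and MIGRATE_PCPTYPES == 3; not the exact kernel code):
>>>>
>>>>   64K PAGE_SIZE: pageblock = 512M (HPAGE_PMD_ORDER), so
>>>>     recommended_min ~= 512M * nr_zones * 2                   (headroom)
>>>>                      + 512M * nr_zones * MIGRATE_PCPTYPES^2  (reserves)
>>>>                      = 2G + 9G ~= 11.2G
>>>>   4K PAGE_SIZE: pageblock = 2M, so the same formula gives ~44M, which
>>>>     is below the ~86M baseline from calculate_min_free_kbytes()
>>>>     (int_sqrt(lowmem_kbytes * 16) on 480G), so the baseline wins.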
>>>>
>>>> Instead of using HPAGE_PMD_ORDER for pageblock_order in
>>>> set_recommended_min_free_kbytes, use the highest enabled large folio
>>>> order.
>>>> With this patch, when only the 2M THP hugepage size is set to madvise
>>>> on the same machine with 64K page size, and the rest of the sizes are
>>>> set to never, the min, low and high watermarks evaluate to 2.08G, 2.6G
>>>> and 3.1G respectively. When the 512M THP hugepage size is set to
>>>> madvise on the same machine with 64K page size, the min, low and high
>>>> watermarks evaluate to 11.2G, 14G and 16.8G respectively, the same as
>>>> without this patch.
>>>
>>> Getting pageblock_order involved here might be confusing. I think you just
>>> want to adjust min, low and high watermarks to reasonable values.
>>> Is it OK to rename min_thp_pageblock_nr_pages to min_nr_free_pages_per_zone
>>> and move MIGRATE_PCPTYPES * MIGRATE_PCPTYPES inside? Otherwise, the changes
>>> look reasonable to me.
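>>>
>>> Something like this sketch is what I have in mind (here
>>> highest_enabled_thp_order() is a hypothetical helper standing in for
>>> however the patch computes the largest enabled order):
>>>
>>>   static unsigned long min_nr_free_pages_per_zone(void)
>>>   {
>>>       /* defrag headroom plus per-migratetype reserves, per zone */
>>>       unsigned long nr_pages = 1UL << highest_enabled_thp_order();
>>>
>>>       return nr_pages * (2 + MIGRATE_PCPTYPES * MIGRATE_PCPTYPES);
>>>   }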
>>
>> Hi Zi,
>>
>> Thanks for the review!
>>
>> I forgot to change it in another place, sorry about that! So I can't move
>> MIGRATE_PCPTYPES * MIGRATE_PCPTYPES into the combined function.
>> I have added the additional place where min_thp_pageblock_nr_pages() is
>> called as a fixlet here:
>> https://lore.kernel.org/all/a179fd65-dc3f-4769-9916-3033497188ba@gmail.com/
>>
>> I think, at least in this context, the original name pageblock_nr_pages
>> isn't correct, since it is min(HPAGE_PMD_ORDER, PAGE_BLOCK_ORDER).
>> The new name min_thp_pageblock_nr_pages is also not great, so I am happy
>> to change it to something more appropriate.
> 
> Got it. The pageblock is the defragmentation granularity. If the user only
> wants 2MB mTHP, maybe the pageblock order should be adjusted. Otherwise,
> the kernel will defragment at 512MB granularity, which might not be
> efficient. Maybe make pageblock_order a boot-time parameter?
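>
> A minimal sketch of such a parameter (hypothetical name, and it assumes
> pageblock_order becomes a variable rather than a compile-time macro):
>
>   static unsigned int pageblock_order_override __initdata;
>
>   static int __init early_pageblock_order(char *buf)
>   {
>       return kstrtouint(buf, 0, &pageblock_order_override);
>   }
>   early_param("pageblock_order", early_pageblock_order);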
> 
> In addition, we are mixing two things together:
> 1. min, low, and high watermarks: they affect when memory reclaim and compaction
>    will be triggered;
> 2. pageblock order: it is the granularity of defragmentation for creating
>    mTHP/THP.
> 
> In your use case, you want to lower watermarks, right? Considering what you
> said below, I wonder if we want a way of enforcing vm.min_free_kbytes,
> like a new sysctl knob, vm.force_min_free_kbytes (yeah the suggestion
> is lame, sorry).
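>
> Roughly like this (hypothetical knob; the handler would clamp the value
> and call setup_per_zone_wmarks(), details omitted):
>
>   static const struct ctl_table thp_sysctls[] = {
>       {
>           .procname     = "force_min_free_kbytes",
>           .data         = &force_min_free_kbytes,
>           .maxlen       = sizeof(int),
>           .mode         = 0644,
>           .proc_handler = force_min_free_kbytes_handler,
>       },
>   };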
> 
> I think for 2, we might want to decouple pageblock order from defragmentation
> granularity.
> 

This is a good point. I only did it for the watermarks in the RFC, but there
is no reason the defrag granularity has to be 512M chunks, and it is probably
very inefficient to do it that way.

Instead of replacing pageblock_nr_pages in just set_recommended_min_free_kbytes,
maybe we just need to change the definition of pageblock_order in [1] to take
the highest enabled large folio order into account instead of HPAGE_PMD_ORDER?

[1] https://elixir.bootlin.com/linux/v6.15.1/source/include/linux/pageblock-flags.h#L50
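
For reference, the current compile-time definition is roughly the following
(paraphrasing [1], config guards omitted). Making it track the highest
enabled mTHP order would mean turning it into a runtime value, since the
enabled orders can change via sysfs after boot:

  /* include/linux/pageblock-flags.h, roughly: */
  #define pageblock_order    min(HPAGE_PMD_ORDER, PAGE_BLOCK_ORDER)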

I really want to avoid a solution that requires changing a Kconfig option or
the kernel command line. It would mean a reboot whenever a workload that
works optimally with a different THP size runs on the server, and that would
make workload orchestration a nightmare.


> 
>>>
>>> Another concern with tying watermarks to the highest THP order is that
>>> if the user enables PMD THP on such a system with 2MB mTHP enabled
>>> initially, it could trigger unexpected memory reclaim and compaction,
>>> right?
>>> That might surprise the user, since they just want to adjust the
>>> availability of THP sizes, but the whole system suddenly becomes busy.
>>> Have you experimented with it?
>>>
>>
>> Yes, I would imagine it would trigger reclaim and compaction if system
>> memory is too low, but that should hopefully be expected? If the user is
>> enabling 512M THP, they should expect the kernel to make changes that
>> allow it to provide hugepages of that size.
>> Also, hopefully no one is enabling PMD THPs when the system is so low on
>> memory that it triggers reclaim! There would be an OOM after just a few
>> of those are faulted in.
> 
> 
> 
> Best Regards,
> Yan, Zi

