Message-ID: <323ed726-fc69-4d80-a7e8-e3762c161ee1@linux.alibaba.com>
Date: Thu, 14 Aug 2025 18:03:26 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: Hugh Dickins <hughd@...gle.com>
Cc: akpm@...ux-foundation.org, willy@...radead.org, david@...hat.com,
 lorenzo.stoakes@...cle.com, ziy@...dia.com, Liam.Howlett@...cle.com,
 npache@...hat.com, ryan.roberts@....com, dev.jain@....com,
 baohua@...nel.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] mm: shmem: fix the strategy for the tmpfs 'huge='
 options



On 2025/8/13 14:59, Hugh Dickins wrote:
> On Tue, 12 Aug 2025, Baolin Wang wrote:
>> On 2025/7/30 16:14, Baolin Wang wrote:
>>> After commit acd7ccb284b8 ("mm: shmem: add large folio support for tmpfs"),
>>> we have extended tmpfs to allow any sized large folios, rather than just
>>> PMD-sized large folios.
>>>
>>> The strategy discussed previously was:
>>>
>>> "
>>> Considering that tmpfs already has the 'huge=' option to control the
>>> PMD-sized large folios allocation, we can extend the 'huge=' option to
>>> allow any sized large folios.  The semantics of the 'huge=' mount option
>>> are:
>>>
>>>       huge=never: no large folios of any size
>>>       huge=always: large folios of any size
>>>       huge=within_size: like 'always', but respect i_size
>>>       huge=advise: like 'always', but only if requested with madvise()
>>>
>>> Note: for tmpfs mmap() faults, due to the lack of a write size hint, still
>>> allocate the PMD-sized huge folios if huge=always/within_size/advise is
>>> set.
>>>
>>> Moreover, the 'deny' and 'force' testing options controlled by
>>> '/sys/kernel/mm/transparent_hugepage/shmem_enabled' still retain the same
>>> semantics: 'deny' disables large folios of any size for tmpfs, while
>>> 'force' enables PMD-sized large folios for tmpfs.
>>> "
>>>
>>> This means that when tmpfs is mounted with 'huge=always' or 'huge=within_size',
>>> tmpfs will take the highest allowable order hint from the size of the write()
>>> or fallocate() request, and will then try each allowable large order, rather
>>> than always attempting to allocate PMD-sized large folios as before.
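
(To make that order-hint behaviour concrete, a small standalone userspace
sketch; highest_order_for_len() and try_alloc_folio_order() are made-up
stand-ins for the kernel's logic, not real functions:)

	#include <stdio.h>
	#include <stdlib.h>

	#define PAGE_SHIFT 12
	#define PMD_ORDER  9	/* 2MB huge page with 4KB base pages (x86-64) */

	/* Highest folio order that still fits within 'len' bytes, capped at PMD. */
	static int highest_order_for_len(size_t len)
	{
		int order = 0;

		while (order + 1 <= PMD_ORDER &&
		       ((size_t)1 << (order + 1 + PAGE_SHIFT)) <= len)
			order++;
		return order;
	}

	/* Stand-in for folio allocation: pretend high orders fail when fragmented. */
	static void *try_alloc_folio_order(int order)
	{
		return order > 4 ? NULL : malloc((size_t)1 << (order + PAGE_SHIFT));
	}

	int main(void)
	{
		size_t write_len = 1 << 20;	/* a 1MB write() */
		void *p = NULL;
		int order;

		/* Start from the order hinted by the write size, fall back downwards. */
		for (order = highest_order_for_len(write_len); order >= 0; order--) {
			p = try_alloc_folio_order(order);
			if (p) {
				printf("allocated an order-%d folio\n", order);
				break;
			}
		}
		free(p);
		return 0;
	}
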
>>>
>>> However, this might break some user scenarios for those who want to use
>>> PMD-sized large folios, such as the i915 driver, which does not supply a
>>> write size hint when allocating shmem [1].
>>>
>>> Moreover, Hugh also complained that this will cause a regression in
>>> userspace with 'huge=always' or 'huge=within_size'.
>>>
>>> So, let's revisit the strategy for tmpfs large folio allocation. A simple fix
>>> would be to always try PMD-sized large folios first and, if that fails, fall
>>> back to smaller large folios. However, this approach differs from the large
>>> folio allocation strategy used by other file systems. Is this acceptable?
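
(That "simple fix", sketched with the same hypothetical helpers as the
previous snippet: with huge=always, start at PMD_ORDER regardless of the
write size, and fall back to smaller orders only on allocation failure.)

	for (order = PMD_ORDER; order >= 0; order--) {
		p = try_alloc_folio_order(order);
		if (p)
			break;	/* PMD-sized when possible, smaller only as fallback */
	}
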
>>>
>>> [1]
>>> https://lore.kernel.org/lkml/0d734549d5ed073c80b11601da3abdd5223e1889.1753689802.git.baolin.wang@linux.alibaba.com/
>>> Fixes: acd7ccb284b8 ("mm: shmem: add large folio support for tmpfs")
>>> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
>>> ---
>>> Note: this is just an RFC patch. I would like to hear others' opinions or
>>> see if there is a better way to address Hugh's concern.
> 
> Sorry, I am still evaluating this RFC patch.
> 
> Certainly I observe it taking us in the right direction, giving PMD-sized
> pages on tmpfs huge=always, as 6.13 and earlier releases did - thank you.
> 
> But the explosion of combinations which mTHP and FS large folios bring,
> the amount that needs checking, is close to defeating me; and I've had
> to spend a lot of the time re-educating myself on the background -
> not looking to see whether this particular patch is right or not.
> Still working on it.

OK. Thanks.

>> If we use this approach to fix the PMD large folio regression, should we also
>> change tmpfs mmap() to allow allocating any sized large folios, but always try
>> to allocate PMD-sized large folios first? What do you think? Thanks.
> 
> Probably: I would like the mmap allocations to follow the same rules.
> 
> But finding it a bit odd how the current implementation limits tmpfs
> large folios to when huge=notnever (is that a fair statement?), whereas

Yes, this is mainly to ensure backward compatibility with the 'huge='
options. Moreover, in the future, we could set the default value of
'tmpfs_huge' to 'always' (controlled via the cmdline:
transparent_hugepage_tmpfs=) to allow all tmpfs mounts to use large
folios by default.

> other filesystems are now being freely given large folios - using
> different GFP flags from what MM uses (closest to defrag=always I think),
> and with no limitation - whereas MM folks are off devising ever newer
> ways to restrict access to huge pages.
> 
> And (conversely) I am unhappy with the way write and fallocate (and split
> and collapse? in flight I think) are following the FS approach of allowing
> every fractal, when mTHP/shmem_enabled is (or can be) more limiting.  I
> think it less surprising (and more efficient when fragmented) for shmem
> FS operations to be restricted to the same subset as "shared anon".

Understood. We discussed this before, but it didn't get support :(
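
(For concreteness, your suggestion could be modeled as intersecting the
write-size order hint with the orders enabled via the per-size mTHP
shmem_enabled controls; the bitmap values below are made up for
illustration, not read from sysfs:)

	#include <stdio.h>

	int main(void)
	{
		unsigned long size_hint_orders = (1UL << 10) - 1;	/* orders 0..9 fit the write size */
		unsigned long mthp_enabled = (1UL << 9) | (1UL << 4);	/* e.g. only 2M and 64K enabled */
		unsigned long allowed = size_hint_orders & mthp_enabled;
		int order;

		/* Only orders in both masks may be tried, highest first. */
		for (order = 9; order >= 0; order--)
			if (allowed & (1UL << order))
				printf("may try order %d\n", order);
		return 0;
	}
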
