Message-ID: <aRRPvn4DYAhuGtq3@localhost.localdomain>
Date: Wed, 12 Nov 2025 10:13:34 +0100
From: Oscar Salvador <osalvador@...e.de>
To: Hugh Dickins <hughd@...gle.com>
Cc: Muchun Song <muchun.song@...ux.dev>,
	David Hildenbrand <david@...hat.com>,
	Deepanshu Kartikey <kartikey406@...il.com>,
	Vivek Kasireddy <vivek.kasireddy@...el.com>,
	baolin.wang@...ux.alibaba.com, akpm@...ux-foundation.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	syzbot+f64019ba229e3a5c411b@...kaller.appspotmail.com
Subject: Re: [PATCH] mm/memfd: clear hugetlb pages on allocation

On Tue, Nov 11, 2025 at 10:55:03PM -0800, Hugh Dickins wrote:
> Thanks a lot, Deepanshu and syzbot: this sounds horrid, and important
> to fix very soon; and will need a Fixes tag (with stable Cc'ed when
> the fix goes into mm.git), I presume it's
> 
> Fixes: 89c1905d9c14 ("mm/gup: introduce memfd_pin_folios() for pinning memfd folios")
> 
> But although my name appears against mm/memfd.c, the truth is I know
> little of hugetlb (maintainers now addressed), and when its folios
> are supposed to get zeroed (would a __GFP_ZERO somewhere be better?).
> 
> I was puzzled by how udmabuf came into the picture, since hugetlbfs
> has always supported the read (not write) system call: but see now
> that there is this surprising backdoor into the hugetlb subsystem,
> via memfd and GUP pinning.
> 
> And where does that folio get marked uptodate, or is "uptodate"
> irrelevant on hugetlbfs?  Are the right locks taken, or could
> there be races when adding to hugetlbfs cache in this way?

Thanks, Hugh, for raising this.

memfd_alloc_folio() seems to try to recreate what hugetlb_no_page()
does, although in a slightly different way.

The thing is that, as far as I know, we should grab the hugetlb fault
mutex before trying to add a new page to the page cache, per the
comment in hugetlb_fault():

 "
   /*
    * Serialize hugepage allocation and instantiation, so that we don't
    * get spurious allocation failures if two CPUs race to instantiate
    * the same page in the page cache.
    */
 "

and that is what all callers of hugetlb_add_to_page_cache() do at the
moment, all except memfd_alloc_folio(), so I guess this one needs fixing.
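
Roughly, I would expect memfd_alloc_folio() to take the fault mutex around
the page cache insertion, the same way hugetlbfs_fallocate() and
hugetlb_fault() do. Untested sketch, and I am hand-waving where mapping/idx
come from in that function, but something like:

	u32 hash;

	/* Serialize against other instantiations of the same index */
	hash = hugetlb_fault_mutex_hash(mapping, idx);
	mutex_lock(&hugetlb_fault_mutex_table[hash]);

	err = hugetlb_add_to_page_cache(folio, mapping, idx);

	mutex_unlock(&hugetlb_fault_mutex_table[hash]);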

Regarding the uptodate question, I do not see anything special about this
situation that would let us skip it.
We seem to mark the folio uptodate every time we allocate one, __and__ we
do so before adding it to the page cache (which is expected, right?).
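
That is, I would expect the same ordering as in hugetlb_no_page(): mark the
folio uptodate first, then insert it. Sketch only, variable names may differ
in memfd_alloc_folio():

	__folio_mark_uptodate(folio);
	err = hugetlb_add_to_page_cache(folio, mapping, idx);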

Now, for the __GFP_ZERO question.
This one is nasty.
hugetlb_reserve_pages() will allocate surplus folios without zeroing, but
those are zeroed in the faulting path before being mapped into userspace
page tables (see folio_zero_user() in hugetlb_no_page()).
So unless I am missing something, we need to zero them in this case as well.
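
So I think the fix also wants a folio_zero_user() (or an equivalent zeroing
of the whole folio) before marking it uptodate, roughly like
hugetlb_no_page() does. Sketch only; there is no faulting address in the
memfd path, so the address hint is just whatever we pick:

	/* No real fault address to hint with here */
	folio_zero_user(folio, 0);
	__folio_mark_uptodate(folio);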


-- 
Oscar Salvador
SUSE Labs
