Message-ID: <27a07825-5dc0-449d-99fa-e1d49f9c5be2@lucifer.local>
Date: Wed, 14 Jan 2026 10:01:39 +0000
From: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
To: Chris Mason <clm@...a.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
        David Hildenbrand <david@...nel.org>,
        "Liam R . Howlett" <Liam.Howlett@...cle.com>,
        Vlastimil Babka <vbabka@...e.cz>, Mike Rapoport <rppt@...nel.org>,
        Suren Baghdasaryan <surenb@...gle.com>, Michal Hocko <mhocko@...e.com>,
        Jann Horn <jannh@...gle.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] mm/madvise: allow guard page install/remove under
 VMA lock

On Tue, Jan 13, 2026 at 02:08:51PM -0800, Chris Mason wrote:
> On Mon, 10 Nov 2025 17:22:58 +0000 Lorenzo Stoakes <lorenzo.stoakes@...cle.com> wrote:
>
> > We only need to keep the page table stable so we can perform this operation
> > under the VMA lock. PTE installation is stabilised via the PTE lock.
> >
> > One caveat is that, if we prepare vma->anon_vma we must hold the mmap read
> > lock. We can account for this by adapting the VMA locking logic to
> > explicitly check for this case and prevent a VMA lock from being acquired
> > should it be the case.
> >
> > This check is safe: while we might be raced on anon_vma installation,
> > that would simply make the check conservative. There's no way for us to
> > see an anon_vma and then for it to be cleared, as doing so requires the
> > mmap/VMA write lock.
> >
> > We abstract the VMA lock validity logic to is_vma_lock_sufficient() for
> > this purpose, and add prepares_anon_vma() to abstract the anon_vma logic.
> >
> > In order to do this we need to have a way of installing page tables
> > explicitly for an identified VMA, so we export walk_page_range_vma() in an
> > unsafe variant - walk_page_range_vma_unsafe() and use this should the VMA
> > read lock be taken.
> >
> > We additionally update the comments in madvise_guard_install() to more
> > accurately reflect the cases in which the logic may be reattempted,
> > specifically THP huge pages being present.
> >
>
> Hi Lorenzo, one more flagged by AI:
>
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index 7ed5bedb8f8ef..2a165e9beb5bb 100644
>
> [ ... ]
>
> > @@ -1173,8 +1180,13 @@ static long madvise_guard_install(struct madvise_behavior *madv_behavior)
> >  		unsigned long nr_pages = 0;
> >
> >  		/* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
> > -		err = walk_page_range_mm_unsafe(vma->vm_mm, range->start,
> > -				range->end, &guard_install_walk_ops, &nr_pages);
> > +		if (madv_behavior->lock_mode == MADVISE_VMA_READ_LOCK)
> > +			err = walk_page_range_vma_unsafe(madv_behavior->vma,
> > +					range->start, range->end, &walk_ops,
> > +					&nr_pages);
> > +		else
> > +			err = walk_page_range_mm_unsafe(vma->vm_mm, range->start,
> > +					range->end, &walk_ops, &nr_pages);
> >  		if (err < 0)
> >  			return err;
> >
> > @@ -1195,8 +1207,7 @@ static long madvise_guard_install(struct madvise_behavior *madv_behavior)
> >  	}
>
> When err > 0 (zap needed), zap_page_range_single() is called below
> regardless of lock_mode.  If lock_mode == MADVISE_VMA_READ_LOCK (only VMA
> lock is held), and the zap range contains a THP PUD mapping that requires
> splitting, the call chain:
>
>   madvise_guard_install()
>     -> zap_page_range_single()
>       -> zap_page_range_single_batched()
>         -> unmap_single_vma()
>           -> unmap_page_range()
>             -> zap_p4d_range()
>               -> zap_pud_range()
>
> will hit mmap_assert_locked() in zap_pud_range() (mm/memory.c) when
> pud_trans_huge() is true and partial PUD splitting is required.

Yeah I was already aware of this, and I thought it was handled via
is_valid_guard_vma().

You only get huge PUD THP for DAX or VFIO. VFIO is covered, as those VMAs
are VM_SPECIAL, but yeah, maybe not dev-dax...

It's a stupid situation really; I don't think we should be asserting there,
but I don't necessarily want to unravel some locking horror show at this
point. I'll put it on my TODO list to come back to that.

>
> The assertion will trigger a WARN_ON because only the VMA lock is held,
> not the mmap lock.
>
> Is there an additional check needed before calling zap_page_range_single()
> when in VMA lock mode, or should this path fall back to mmap lock when THP
> is possible?

is_valid_guard_vma() is the appropriate place.

I'll add a !vma_is_dax() check there for now and send a patch out.

>
> > @@ -1715,9 +1763,9 @@ static enum madvise_lock_mode get_lock_mode(struct madvise_behavior *madv_behavi
> >  	case MADV_POPULATE_READ:
> >  	case MADV_POPULATE_WRITE:
> >  	case MADV_COLLAPSE:
> > +		return MADVISE_MMAP_READ_LOCK;
> >  	case MADV_GUARD_INSTALL:
> >  	case MADV_GUARD_REMOVE:
> > -		return MADVISE_MMAP_READ_LOCK;
> >  	case MADV_DONTNEED:
> >  	case MADV_DONTNEED_LOCKED:
> >  	case MADV_FREE:
>
> This change moves MADV_GUARD_INSTALL to use MADVISE_VMA_READ_LOCK, but
> zap_page_range_single() called later in madvise_guard_install() may
> require the mmap lock for THP PUD splitting as noted above.

Clever little machine ;)

Cheers, Lorenzo
