Message-ID: <202543016037-aBJJJdupFVd_6FTX-arkamar@atlas.cz>
Date: Wed, 30 Apr 2025 18:00:37 +0200
From: Petr Vaněk <arkamar@...as.cz>
To: David Hildenbrand <david@...hat.com>
Cc: linux-kernel@...r.kernel.org, Andrew Morton <akpm@...ux-foundation.org>,
Ryan Roberts <ryan.roberts@....com>, linux-mm@...ck.org,
stable@...r.kernel.org
Subject: Re: [PATCH 1/1] mm: Fix folio_pte_batch() overcount with zero PTEs
On Wed, Apr 30, 2025 at 04:37:21PM +0200, David Hildenbrand wrote:
> On 30.04.25 13:52, Petr Vaněk wrote:
> > On Tue, Apr 29, 2025 at 08:56:03PM +0200, David Hildenbrand wrote:
> >> On 29.04.25 20:33, Petr Vaněk wrote:
> >>> On Tue, Apr 29, 2025 at 05:45:53PM +0200, David Hildenbrand wrote:
> >>>> On 29.04.25 16:52, David Hildenbrand wrote:
> >>>>> On 29.04.25 16:45, Petr Vaněk wrote:
> >>>>>> On Tue, Apr 29, 2025 at 04:29:30PM +0200, David Hildenbrand wrote:
> >>>>>>> On 29.04.25 16:22, Petr Vaněk wrote:
> >>>>>>>> folio_pte_batch() could overcount the number of contiguous PTEs when
> >>>>>>>> pte_advance_pfn() returns a zero-valued PTE and the following PTE in
> >>>>>>>> memory also happens to be zero. The loop doesn't break in such a case
> >>>>>>>> because pte_same() returns true, and the batch size is advanced by one
> >>>>>>>> more than it should be.
> >>>>>>>>
> >>>>>>>> To fix this, bail out early if a non-present PTE is encountered,
> >>>>>>>> preventing the invalid comparison.
> >>>>>>>>
> >>>>>>>> This issue started to appear after commit 10ebac4f95e7 ("mm/memory:
> >>>>>>>> optimize unmap/zap with PTE-mapped THP") and was discovered via git
> >>>>>>>> bisect.
> >>>>>>>>
> >>>>>>>> Fixes: 10ebac4f95e7 ("mm/memory: optimize unmap/zap with PTE-mapped THP")
> >>>>>>>> Cc: stable@...r.kernel.org
> >>>>>>>> Signed-off-by: Petr Vaněk <arkamar@...as.cz>
> >>>>>>>> ---
> >>>>>>>> mm/internal.h | 2 ++
> >>>>>>>> 1 file changed, 2 insertions(+)
> >>>>>>>>
> >>>>>>>> diff --git a/mm/internal.h b/mm/internal.h
> >>>>>>>> index e9695baa5922..c181fe2bac9d 100644
> >>>>>>>> --- a/mm/internal.h
> >>>>>>>> +++ b/mm/internal.h
> >>>>>>>> @@ -279,6 +279,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >>>>>>>>  			dirty = !!pte_dirty(pte);
> >>>>>>>>  		pte = __pte_batch_clear_ignored(pte, flags);
> >>>>>>>>
> >>>>>>>> +		if (!pte_present(pte))
> >>>>>>>> +			break;
> >>>>>>>>  		if (!pte_same(pte, expected_pte))
> >>>>>>>>  			break;
> >>>>>>>
> >>>>>>> How could pte_same() suddenly match on a present and non-present PTE.
> >>>>>>
> >>>>>> In the problematic case pte.pte == 0 and expected_pte.pte == 0 as well.
> >>>>>> pte_same() returns a.pte == b.pte -> 0 == 0. Both are non-present PTEs.
> >>>>>
> >>>>> Observe that folio_pte_batch() was called *with a present pte*.
> >>>>>
> >>>>> do_zap_pte_range()
> >>>>>   if (pte_present(ptent))
> >>>>>     zap_present_ptes()
> >>>>>       folio_pte_batch()
> >>>>>
> >>>>> How can we end up with an expected_pte that is !present, if it is based
> >>>>> on the provided pte that *is present* and we only used pte_advance_pfn()
> >>>>> to advance the pfn?
> >>>>
> >>>> I've been staring at the code for too long and don't see the issue.
> >>>>
> >>>> We even have
> >>>>
> >>>> VM_WARN_ON_FOLIO(!pte_present(pte), folio);
> >>>>
> >>>> So the initial pteval we got is present.
> >>>>
> >>>> I don't see how
> >>>>
> >>>> nr = pte_batch_hint(start_ptep, pte);
> >>>> expected_pte = __pte_batch_clear_ignored(pte_advance_pfn(pte, nr), flags);
> >>>>
> >>>> would suddenly result in !pte_present(expected_pte).
> >>>
> >>> The issue is not happening in __pte_batch_clear_ignored() but later,
> >>> in the following line:
> >>>
> >>> expected_pte = pte_advance_pfn(expected_pte, nr);
> >>>
> >>> The issue seems to be in the __pte() function, which converts the PTE
> >>> value to pte_t in pte_advance_pfn(), because the warnings disappear when
> >>> I change the line to
> >>>
> >>> expected_pte = (pte_t){ .pte = pte_val(expected_pte) + (nr << PFN_PTE_SHIFT) };
> >>>
> >>> The kernel probably uses the __pte() function from
> >>> arch/x86/include/asm/paravirt.h because the kernel is configured with
> >>> CONFIG_PARAVIRT=y:
> >>>
> >>> static inline pte_t __pte(pteval_t val)
> >>> {
> >>> 	return (pte_t) { PVOP_ALT_CALLEE1(pteval_t, mmu.make_pte, val,
> >>> 					  "mov %%rdi, %%rax", ALT_NOT_XEN) };
> >>> }
> >>>
> >>> I guess it might cause this weird magic, but I need more time to
> >>> understand what it does :)
> >
> > I understand it a bit better now. __pte() uses xen_make_pte(), which calls
> > pte_pfn_to_mfn(); however, the mfn for this pfn is INVALID_P2M_ENTRY,
> > so pte_pfn_to_mfn() returns 0, see [1].
> >
> > I guess that the mfn was invalidated by the xen-balloon driver?
> >
> > [1] https://web.git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/x86/xen/mmu_pv.c?h=v6.15-rc4#n408
> >
> >> What XEN does with basic primitives that convert between pteval and
> >> pte_t is beyond horrible.
> >>
> >> How come set_ptes() that uses pte_next_pfn()->pte_advance_pfn() does not
> >> run into this?
> >
> > I don't know, but I guess it is somehow related to pfn->mfn translation.
> >
> >> Is it only a problem if we exceed a certain pfn?
> >
> > No, it is a problem if the corresponding mfn for the given pfn is invalid.
> >
> > I am not sure if my original patch is a good fix.
>
> No :)
>
> > Maybe it would be
> > better to have some sort of native_pte_advance_pfn() which will use
> > native_make_pte() rather than __pte(). Or do you think the issue is in
> > the Xen part?
>
> I think what's happening is that -- under XEN only -- we might get garbage when
> calling pte_advance_pfn() and the next PFN would no longer fall into the folio. And
> the current code cannot deal with that XEN garbage.
>
> But still not 100% sure.
>
> The following is completely untested, could you give that a try?
Yes, it solves the issue for me.

However, maybe it would be better to solve it with the following patch.
pte_pfn_to_mfn() already returns the value unchanged for non-present
PTEs. I suggest returning the original PTE when the mfn is
INVALID_P2M_ENTRY, but with the _PAGE_PRESENT bit cleared, rather than
an empty non-present PTE. That way we do not lose the information about
the original pfn, yet it is still clear that the page is not present.
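
To illustrate the intended behaviour (rough sketch only, the pfn value
is made up and the flag combination is simplified):

	/* a present pteval whose pfn has no mfn, e.g. ballooned out */
	pteval_t val = (0x1234UL << PAGE_SHIFT) | _PAGE_RW | _PAGE_PRESENT;

	pte_pfn_to_mfn(val);
	/* currently:        returns 0, the original pfn is lost          */
	/* with this patch:   returns (0x1234UL << PAGE_SHIFT) | _PAGE_RW, */
	/*                    i.e. non-present, but the pfn is preserved   */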
From e84781f9ec4fb7275d5e7629cf7e222466caf759 Mon Sep 17 00:00:00 2001
From: Petr Vaněk <arkamar@...as.cz>
Date: Wed, 30 Apr 2025 17:08:41 +0200
Subject: [PATCH] x86/mm: Reset pte _PAGE_PRESENT bit for invalid mfn
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Signed-off-by: Petr Vaněk <arkamar@...as.cz>
---
arch/x86/xen/mmu_pv.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 38971c6dcd4b..92a6a9af0c65 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -392,28 +392,25 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
 static pteval_t pte_pfn_to_mfn(pteval_t val)
 {
 	if (val & _PAGE_PRESENT) {
 		unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
 		pteval_t flags = val & PTE_FLAGS_MASK;
 		unsigned long mfn;
 
 		mfn = __pfn_to_mfn(pfn);
 
 		/*
-		 * If there's no mfn for the pfn, then just create an
-		 * empty non-present pte. Unfortunately this loses
-		 * information about the original pfn, so
-		 * pte_mfn_to_pfn is asymmetric.
+		 * If there's no mfn for the pfn, just clear the present bit.
 		 */
 		if (unlikely(mfn == INVALID_P2M_ENTRY)) {
-			mfn = 0;
-			flags = 0;
+			mfn = pfn;
+			flags &= ~_PAGE_PRESENT;
 		} else
 			mfn &= ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
 		val = ((pteval_t)mfn << PAGE_SHIFT) | flags;
 	}
 
 	return val;
 }
 
 __visible pteval_t xen_pte_val(pte_t pte)
 {
--
2.48.1
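
For completeness, my understanding of how this maps back to the
folio_pte_batch() overcount (I have not traced it further than my test
case): in

	expected_pte = pte_advance_pfn(expected_pte, nr);

pte_advance_pfn() ends up in __pte(), i.e. xen_make_pte() ->
pte_pfn_to_mfn() under Xen PV. Today that collapses expected_pte to 0 as
soon as the advanced pfn has no mfn, which then pte_same()-matches the
zero PTE that follows in the page table, so the batch is advanced one
entry too far. With the change above expected_pte keeps the advanced pfn
and is merely non-present, so pte_same() fails and the loop breaks where
it should.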