Message-ID: <cac9bf3c-5af1-41be-86a5-bf76384b5e3b@arm.com>
Date: Tue, 29 Apr 2025 16:02:10 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: David Hildenbrand <david@...hat.com>, Petr Vaněk
<arkamar@...as.cz>, linux-kernel@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
stable@...r.kernel.org
Subject: Re: [PATCH 1/1] mm: Fix folio_pte_batch() overcount with zero PTEs
On 29/04/2025 15:46, David Hildenbrand wrote:
> On 29.04.25 16:41, Ryan Roberts wrote:
>> On 29/04/2025 15:29, David Hildenbrand wrote:
>>> On 29.04.25 16:22, Petr Vaněk wrote:
>>>> folio_pte_batch() could overcount the number of contiguous PTEs when
>>>> pte_advance_pfn() returns a zero-valued PTE and the following PTE in
>>>> memory also happens to be zero. The loop doesn't break in such a case
>>>> because pte_same() returns true, and the batch size is advanced by one
>>>> more than it should be.
>>>>
>>>> To fix this, bail out early if a non-present PTE is encountered,
>>>> preventing the invalid comparison.
>>>>
>>>> This issue started to appear after commit 10ebac4f95e7 ("mm/memory:
>>>> optimize unmap/zap with PTE-mapped THP") and was discovered via git
>>>> bisect.
>>>>
>>>> Fixes: 10ebac4f95e7 ("mm/memory: optimize unmap/zap with PTE-mapped THP")
>>>> Cc: stable@...r.kernel.org
>>>> Signed-off-by: Petr Vaněk <arkamar@...as.cz>
>>>> ---
>>>> mm/internal.h | 2 ++
>>>> 1 file changed, 2 insertions(+)
>>>>
>>>> diff --git a/mm/internal.h b/mm/internal.h
>>>> index e9695baa5922..c181fe2bac9d 100644
>>>> --- a/mm/internal.h
>>>> +++ b/mm/internal.h
>>>> @@ -279,6 +279,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>>>  		if (any_dirty)
>>>>  			dirty = !!pte_dirty(pte);
>>>>  		pte = __pte_batch_clear_ignored(pte, flags);
>>>> +		if (!pte_present(pte))
>>>> +			break;
>>>>  
>>>>  		if (!pte_same(pte, expected_pte))
>>>>  			break;
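To make the overcount concrete, here is a small userspace model. Everything in
it is made up for illustration (the PTE encoding, the advance_pfn() stand-in
and its "last good PFN" cutoff); only the raw-value comparison in pte_same()
mirrors the real helper. Once the expected PTE decays to zero, it compares
equal to the empty slot and the batch grows by one too many:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t pte_t;

#define PTE_PRESENT	(1ULL << 0)
#define PFN_SHIFT	12

/* Like the kernel's pte_same() on most arches: raw value comparison. */
static bool pte_same(pte_t a, pte_t b) { return a == b; }
static bool pte_present(pte_t pte) { return pte & PTE_PRESENT; }

/*
 * Stand-in for pte_advance_pfn() misbehaving: past the last mapped PFN it
 * hands back a zero-valued PTE, as reported for XEN PV.
 */
static pte_t advance_pfn(pte_t pte, uint64_t last_good_pfn)
{
	uint64_t pfn = (pte >> PFN_SHIFT) + 1;

	if (pfn > last_good_pfn)
		return 0;
	return (pfn << PFN_SHIFT) | PTE_PRESENT;
}

static int batch(const pte_t *ptes, int max_nr, uint64_t last_pfn, bool fixed)
{
	pte_t expected = advance_pfn(ptes[0], last_pfn);
	int nr = 1;

	while (nr < max_nr) {
		if (fixed && !pte_present(ptes[nr]))
			break;			/* the patch's early bail-out */
		if (!pte_same(ptes[nr], expected))
			break;			/* 0 == 0 sails through here */
		expected = advance_pfn(expected, last_pfn);
		nr++;
	}
	return nr;
}

int main(void)
{
	/* Three present PTEs (PFNs 100..102) followed by an empty slot. */
	pte_t ptes[4] = {
		(100ULL << PFN_SHIFT) | PTE_PRESENT,
		(101ULL << PFN_SHIFT) | PTE_PRESENT,
		(102ULL << PFN_SHIFT) | PTE_PRESENT,
		0,
	};

	printf("without check: %d, with check: %d\n",
	       batch(ptes, 4, 102, false), batch(ptes, 4, 102, true));
	return 0;
}

Built with gcc, this prints "without check: 4, with check: 3": the unchecked
loop counts the empty slot as a fourth batch entry.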
>>>
>>> How could pte_same() suddenly match on a present and a non-present PTE?
>>>
>>> Something with XEN is really problematic here.
>>>
>>
>> We are inside a lazy MMU region (arch_enter_lazy_mmu_mode()) at this point,
>> which I believe XEN uses. If a PTE was written and then read back while in
>> lazy mode, you could get a stale value.
>>
>> See
>> https://lore.kernel.org/all/912c7a32-b39c-494f-a29c-4865cd92aeba@agordeev.local/
>> for an example bug.
>
> So if we cannot trust ptep_get() output, then, ... how could we trust anything
> here and ever possibly batch?
The point is that, for a write followed by a read of the same PTE, the read may
not return what was written. It could instead return the value the PTE had at
the point of entry into lazy MMU mode.
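To spell out the pattern (a sketch only; the helper names are real kernel ones,
but this is not lifted from any particular call site):

	arch_enter_lazy_mmu_mode();

	set_pte_at(mm, addr, ptep, new_pte);	/* update may only be queued */
	pte = ptep_get(ptep);			/* may still see the old value */

	arch_leave_lazy_mmu_mode();		/* queued updates flushed here */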
I guess one quick way to test is to hack out lazy MMU support. Something like
this? (totally untested):
----8<----
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index c4c23190925c..1f0a1a713072 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -541,22 +541,6 @@ static inline void arch_end_context_switch(struct task_struct *next)
 	PVOP_VCALL1(cpu.end_context_switch, next);
 }
 
-#define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
-static inline void arch_enter_lazy_mmu_mode(void)
-{
-	PVOP_VCALL0(mmu.lazy_mode.enter);
-}
-
-static inline void arch_leave_lazy_mmu_mode(void)
-{
-	PVOP_VCALL0(mmu.lazy_mode.leave);
-}
-
-static inline void arch_flush_lazy_mmu_mode(void)
-{
-	PVOP_VCALL0(mmu.lazy_mode.flush);
-}
-
 static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 				phys_addr_t phys, pgprot_t flags)
 {
----8<----
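FWIW, with __HAVE_ARCH_ENTER_LAZY_MMU_MODE no longer defined, the generic
fallbacks in include/linux/pgtable.h should kick in and turn the lazy MMU
hooks into no-ops (quoting those from memory, so worth double-checking in
your tree):

#ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
#define arch_enter_lazy_mmu_mode()	do {} while (0)
#define arch_leave_lazy_mmu_mode()	do {} while (0)
#define arch_flush_lazy_mmu_mode()	do {} while (0)
#endif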