Message-ID: <021dfe38-f786-46d0-a43f-769aff07b3f0@redhat.com>
Date: Tue, 20 May 2025 11:10:04 +0200
From: David Hildenbrand <david@...hat.com>
To: Dev Jain <dev.jain@....com>, ryan.roberts@....com
Cc: anshuman.khandual@....com, catalin.marinas@....com,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
mark.rutland@....com, stable@...r.kernel.org, will@...nel.org,
yang@...amperecomputing.com
Subject: Re: [PATCH v2] arm64: Restrict pagetable teardown to avoid false
warning
On 20.05.25 11:05, Dev Jain wrote:
> On 19/05/2025 13:16, David Hildenbrand wrote:
>> On 19.05.25 11:08, Ryan Roberts wrote:
>>> On 18/05/2025 10:54, Dev Jain wrote:
>>>> Commit 9c006972c3fe removes the pxd_present() checks because the caller
>>>
>>> nit: please use the standard format for describing commits: Commit 9c006972c3fe
>>> ("arm64: mmu: drop pXd_present() checks from pXd_free_pYd_table()")
>>>
>>>> checks pxd_present(). But, in case of vmap_try_huge_pud(), the caller only
>>>> checks pud_present(); pud_free_pmd_page() recurses on each pmd through
>>>> pmd_free_pte_page(), wherein the pmd may be none. Thus it is possible to
>>>> hit a warning in the latter, since pmd_none => !pmd_table(). Thus, add
>>>> a pmd_present() check in pud_free_pmd_page().
>>>>
>>>> This problem was found by code inspection.
>>>>
>>>> This patch is based on 6.15-rc6.
>>>
>>> nit: please remove this to below the "---", its not part of the commit log.
>>>
>>>>
>>>> Fixes: 9c006972c3fe (arm64: mmu: drop pXd_present() checks from
>>>> pXd_free_pYd_table())
>>>>
>>>
>>> nit: remove empty line; the tags should all be in a single block with no empty
>>> lines.
>>>
>>>> Cc: <stable@...r.kernel.org>
>>>> Reported-by: Ryan Roberts <ryan.roberts@....com>
>>>> Signed-off-by: Dev Jain <dev.jain@....com>
>>>> ---
>>>> v1->v2:
>>>> - Enforce check in caller
>>>>
>>>> arch/arm64/mm/mmu.c | 3 ++-
>>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>>> index ea6695d53fb9..5b1f4cd238ca 100644
>>>> --- a/arch/arm64/mm/mmu.c
>>>> +++ b/arch/arm64/mm/mmu.c
>>>> @@ -1286,7 +1286,8 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
>>>> next = addr;
>>>> end = addr + PUD_SIZE;
>>>> do {
>>>> - pmd_free_pte_page(pmdp, next);
>>>> + if (pmd_present(*pmdp))
>>>> + pmd_free_pte_page(pmdp, next);
>>>
>>> pmd_free_pte_page() uses READ_ONCE() to access *pmdp to ensure the load can't
>>> be torn. I suspect we don't technically need that in these functions, because
>>> there can be no race with a writer.
>>
>> Yeah, if there is no proper locking in place, the function would already
>> seriously mess up (double freeing, etc.).
>
> Indeed; there is no locking, but this portion of the vmalloc VA space has been
> allocated to us exclusively, so we know there can be no one else racing.
>
>>
>>> But the arm64 arch code always uses
>>> READ_ONCE() for dereferencing pgtable entries, for safety. Perhaps we should be
>>> consistent here?
>>
>> mm/vmalloc.c: if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
>
> Yes, I saw that. I know that we don't technically need READ_ONCE(). I'm just
> proposing that the arm64 code should be consistent with what it already does.
> See commit 20a004e7b017 ("arm64: mm: Use READ_ONCE/WRITE_ONCE when accessing
> page tables").
>
> So I'll just use pmdp_get()?
Maybe that's the cleanest approach. Common code should likely also use
that at some point, @Ryan?
--
Cheers,
David / dhildenb