Message-ID: <04c32758-af9b-492a-909a-ad1bc6c0777d@redhat.com>
Date: Mon, 19 May 2025 14:50:50 +0200
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>, Dev Jain <dev.jain@....com>,
catalin.marinas@....com, will@...nel.org
Cc: anshuman.khandual@....com, mark.rutland@....com,
yang@...amperecomputing.com, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, stable@...r.kernel.org
Subject: Re: [PATCH v2] arm64: Restrict pagetable teardown to avoid false
warning
On 19.05.25 14:47, Ryan Roberts wrote:
> On 19/05/2025 13:16, David Hildenbrand wrote:
>> On 19.05.25 11:08, Ryan Roberts wrote:
>>> On 18/05/2025 10:54, Dev Jain wrote:
>>>> Commit 9c006972c3fe removes the pxd_present() checks because the caller
>>>
>>> nit: please use the standard format for describing commits: Commit 9c006972c3fe
>>> ("arm64: mmu: drop pXd_present() checks from pXd_free_pYd_table()")
>>>
>>>> checks pxd_present(). But, in case of vmap_try_huge_pud(), the caller only
>>>> checks pud_present(); pud_free_pmd_page() recurses on each pmd through
>>>> pmd_free_pte_page(), wherein the pmd may be none. Thus it is possible to
>>>> hit a warning in the latter, since pmd_none => !pmd_table(). Thus, add
>>>> a pmd_present() check in pud_free_pmd_page().
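(To make the failure path concrete, here is a condensed sketch of the
two arm64 helpers as of 6.15-rc6; simplified pseudocode, not the exact
kernel source:)

int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
{
	pmd_t *pmdp = pmd_offset(pudp, addr);
	unsigned long next = addr, end = addr + PUD_SIZE;

	do {
		/* runs for every pmd slot under the pud,
		 * including slots that are pmd_none() */
		pmd_free_pte_page(pmdp, next);
	} while (pmdp++, next += PMD_SIZE, next != end);
	...
}

int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
{
	pmd_t pmd = READ_ONCE(*pmdp);

	/* pmd_none(pmd) implies !pmd_table(pmd), so a none
	 * entry trips the warning although nothing is wrong */
	VM_WARN_ON(!pmd_table(pmd));
	...
}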
>>>>
>>>> This problem was found by code inspection.
>>>>
>>>> This patch is based on 6.15-rc6.
>>>
>>> nit: please move this to below the "---"; it's not part of the commit log.
>>>
>>>>
>>>> Fixes: 9c006972c3fe (arm64: mmu: drop pXd_present() checks from
>>>> pXd_free_pYd_table())
>>>>
>>>
>>> nit: remove empty line; the tags should all be in a single block with no empty
>>> lines.
>>>
>>>> Cc: <stable@...r.kernel.org>
>>>> Reported-by: Ryan Roberts <ryan.roberts@....com>
>>>> Signed-off-by: Dev Jain <dev.jain@....com>
>>>> ---
>>>> v1->v2:
>>>> - Enforce check in caller
>>>>
>>>> arch/arm64/mm/mmu.c | 3 ++-
>>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>>> index ea6695d53fb9..5b1f4cd238ca 100644
>>>> --- a/arch/arm64/mm/mmu.c
>>>> +++ b/arch/arm64/mm/mmu.c
>>>> @@ -1286,7 +1286,8 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
>>>> next = addr;
>>>> end = addr + PUD_SIZE;
>>>> do {
>>>> - pmd_free_pte_page(pmdp, next);
>>>> + if (pmd_present(*pmdp))
>>>
>>> pmd_free_pte_page() is using READ_ONCE() to access the *pmdp to ensure it can't
>>> be torn. I suspect we don't technically need that in these functions because
>>> there can be no race with a writer.
>>
>> Yeah, if there is no proper locking in place the function would already
>> seriously mess up (double freeing etc).
>
> Indeed; there is no locking, but this portion of the vmalloc VA space has been
> allocated to us exclusively, so we know there can be no one else racing.
>
>>
>>> But the arm64 arch code always uses
>>> READ_ONCE() for dereferencing pgtable entries for safety. Perhaps we should be
>>> consistent here?
>>
>> mm/vmalloc.c: if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
>
> Yes, I saw that. I know that we don't technically need READ_ONCE(). I'm just
> proposing that for arm64 code we should be consistent with what it already does.
> See Commit 20a004e7b017 ("arm64: mm: Use READ_ONCE/WRITE_ONCE when accessing
> page tables")
The more READ_ONCE() we add, the less the compiler can optimize (e.g.,
when inlining), and apparently for no good reason in this page-table
handling code, because there cannot be concurrent changes that would
turn the entry !present etc.
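As a toy illustration of that cost (not kernel code; do_something()
just stands in for the real work): with plain dereferences the compiler
may fold the two reads of *pmdp into a single load after inlining,
whereas READ_ONCE() is a volatile access, so both loads have to be
emitted as written and cannot be merged or hoisted:

	/* the compiler may combine these two reads into one load */
	if (pmd_present(*pmdp) && pmd_table(*pmdp))
		do_something();

	/* two volatile loads: each must be performed separately */
	if (pmd_present(READ_ONCE(*pmdp)) && pmd_table(READ_ONCE(*pmdp)))
		do_something();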
Anyhow, something for the arm maintainers to decide.
--
Cheers,
David / dhildenb