Message-Id: <20250221094449.1188427-3-anshuman.khandual@arm.com>
Date: Fri, 21 Feb 2025 15:14:49 +0530
From: Anshuman Khandual <anshuman.khandual@....com>
To: linux-arm-kernel@...ts.infradead.org
Cc: Anshuman Khandual <anshuman.khandual@....com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>,
Ard Biesheuvel <ardb@...nel.org>,
Ryan Roberts <ryan.roberts@....com>,
Mark Rutland <mark.rutland@....com>,
linux-kernel@...r.kernel.org
Subject: [PATCH 2/2] arm64/mm/hotplug: Replace pxx_present() with pxx_valid()
pte_present() returns true when either the PTE_VALID or the PTE_PRESENT_INVALID
bit is set in the entry. However, the PTE_PRESENT_INVALID bit only ever gets set
on user space page table entries to represent the pxx_present_invalid() state.
So the present-invalid state is not possible in kernel page table entries,
including the linear and vmemmap mappings which get torn down during a memory
hot remove operation. Hence just check for pxx_valid() instead of pxx_present()
in all relevant places.
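
For reference, the difference between the two checks boils down to roughly
the following (a simplified paraphrase of the arm64 helpers in
arch/arm64/include/asm/pgtable.h, not the literal in-tree definitions):

	/*
	 * Sketch only: pte_valid() tests just the hardware valid bit,
	 * while pte_present() additionally accepts the software
	 * present-invalid encoding that is only used for user mappings.
	 */
	#define pte_valid(pte)		(!!(pte_val(pte) & PTE_VALID))
	#define pte_present_invalid(pte) \
		((pte_val(pte) & (PTE_VALID | PTE_PRESENT_INVALID)) == PTE_PRESENT_INVALID)
	#define pte_present(pte)	(pte_valid(pte) || pte_present_invalid(pte))

Since kernel mappings never carry PTE_PRESENT_INVALID, the two checks are
equivalent for the page tables torn down here, and pxx_valid() states the
intent more precisely.
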
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Will Deacon <will@...nel.org>
Cc: Ard Biesheuvel <ardb@...nel.org>
Cc: Ryan Roberts <ryan.roberts@....com>
Cc: Mark Rutland <mark.rutland@....com>
Cc: linux-arm-kernel@...ts.infradead.org
Cc: linux-kernel@...r.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@....com>
---
arch/arm64/mm/mmu.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 66906c45c7f6..33a8b77b5e6b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -863,7 +863,7 @@ static void unmap_hotplug_pte_range(pmd_t *pmdp, unsigned long addr,
 		if (pte_none(pte))
 			continue;
 
-		WARN_ON(!pte_present(pte));
+		WARN_ON(!pte_valid(pte));
 		__pte_clear(&init_mm, addr, ptep);
 		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
 		if (free_mapped)
@@ -886,7 +886,7 @@ static void unmap_hotplug_pmd_range(pud_t *pudp, unsigned long addr,
 		if (pmd_none(pmd))
 			continue;
 
-		WARN_ON(!pmd_present(pmd));
+		WARN_ON(!pmd_valid(pmd));
 		if (pmd_sect(pmd)) {
 			pmd_clear(pmdp);
 
@@ -919,7 +919,7 @@ static void unmap_hotplug_pud_range(p4d_t *p4dp, unsigned long addr,
 		if (pud_none(pud))
 			continue;
 
-		WARN_ON(!pud_present(pud));
+		WARN_ON(!pud_valid(pud));
 		if (pud_sect(pud)) {
 			pud_clear(pudp);
 
@@ -1032,7 +1032,7 @@ static void free_empty_pmd_table(pud_t *pudp, unsigned long addr,
 		if (pmd_none(pmd))
 			continue;
 
-		WARN_ON(!pmd_present(pmd) || !pmd_table(pmd) || pmd_sect(pmd));
+		WARN_ON(!pmd_valid(pmd) || !pmd_table(pmd) || pmd_sect(pmd));
 		free_empty_pte_table(pmdp, addr, next, floor, ceiling);
 	} while (addr = next, addr < end);
 
@@ -1072,7 +1072,7 @@ static void free_empty_pud_table(p4d_t *p4dp, unsigned long addr,
 		if (pud_none(pud))
 			continue;
 
-		WARN_ON(!pud_present(pud) || !pud_table(pud) || pud_sect(pud));
+		WARN_ON(!pud_valid(pud) || !pud_table(pud) || pud_sect(pud));
 		free_empty_pmd_table(pudp, addr, next, floor, ceiling);
 	} while (addr = next, addr < end);
 
--
2.30.2