Message-ID: <7fc9a78423fceda0bfd07f80583a7c9c0938e339.1728954719.git.ritesh.list@gmail.com>
Date: Tue, 15 Oct 2024 07:03:28 +0530
From: "Ritesh Harjani (IBM)" <ritesh.list@...il.com>
To: linuxppc-dev@...ts.ozlabs.org
Cc: kasan-dev@...glegroups.com,
linux-mm@...ck.org,
Marco Elver <elver@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Heiko Carstens <hca@...ux.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>,
Nicholas Piggin <npiggin@...il.com>,
Madhavan Srinivasan <maddy@...ux.ibm.com>,
Christophe Leroy <christophe.leroy@...roup.eu>,
Hari Bathini <hbathini@...ux.ibm.com>,
"Aneesh Kumar K . V" <aneesh.kumar@...nel.org>,
Donet Tom <donettom@...ux.vnet.ibm.com>,
Pavithra Prakash <pavrampu@...ux.vnet.ibm.com>,
LKML <linux-kernel@...r.kernel.org>,
"Ritesh Harjani (IBM)" <ritesh.list@...il.com>
Subject: [RFC RESEND v2 05/13] book3s64/hash: Add hash_debug_pagealloc_add_slot() function
This adds a hash_debug_pagealloc_add_slot() function instead of open
coding that logic in htab_bolt_mapping(). This is required since we will
be separating the kfence functionality so that it no longer depends upon
debug_pagealloc.
No functional change in this patch.
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@...il.com>
---
arch/powerpc/mm/book3s64/hash_utils.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
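Not part of the patch, but as a reviewer aid: below is a simplified,
self-contained sketch (not the kernel code itself) of the bookkeeping
this helper performs. The hash slot recorded for a linear-map page is
stored in a byte array with the high bit (0x80) acting as a "mapped"
marker, which the unmap path can later test and clear. The helper names
(add_slot/unmap_page), the toy table size and the PAGE_SHIFT value are
illustrative assumptions only.

#include <stdio.h>

#define PAGE_SHIFT	16	/* assume 64K pages for illustration */

static unsigned char linear_map_hash_slots[128];	/* toy-sized table */
static unsigned long linear_map_hash_count = 128;

/* Record the hash slot for a linear-map page, with 0x80 as "mapped" flag. */
static void add_slot(unsigned long paddr, int slot)
{
	if ((paddr >> PAGE_SHIFT) < linear_map_hash_count)
		linear_map_hash_slots[paddr >> PAGE_SHIFT] = slot | 0x80;
}

/* Unmap only pages that were previously recorded, then clear the flag. */
static void unmap_page(unsigned long lmi)
{
	if (!(linear_map_hash_slots[lmi] & 0x80))
		return;
	printf("unmapping lmi %lu, hash slot %d\n",
	       lmi, linear_map_hash_slots[lmi] & 0x7f);
	linear_map_hash_slots[lmi] &= ~0x80;
}

int main(void)
{
	add_slot(3UL << PAGE_SHIFT, 5);	/* record slot 5 for page index 3 */
	unmap_page(3);			/* prints: unmapping lmi 3, hash slot 5 */
	unmap_page(4);			/* no-op: never recorded */
	return 0;
}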
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 82151fff9648..6e3860224351 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -328,6 +328,14 @@ static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long lmi)
mmu_kernel_ssize, 0);
}
+static inline void hash_debug_pagealloc_add_slot(phys_addr_t paddr, int slot)
+{
+ if (!debug_pagealloc_enabled())
+ return;
+ if ((paddr >> PAGE_SHIFT) < linear_map_hash_count)
+ linear_map_hash_slots[paddr >> PAGE_SHIFT] = slot | 0x80;
+}
+
int hash__kernel_map_pages(struct page *page, int numpages, int enable)
{
unsigned long flags, vaddr, lmi;
@@ -353,6 +361,7 @@ int hash__kernel_map_pages(struct page *page, int numpages,
{
return 0;
}
+static inline void hash_debug_pagealloc_add_slot(phys_addr_t paddr, int slot) {}
#endif /* CONFIG_DEBUG_PAGEALLOC */
/*
@@ -513,9 +522,7 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long vend,
break;
cond_resched();
- if (debug_pagealloc_enabled() &&
- (paddr >> PAGE_SHIFT) < linear_map_hash_count)
- linear_map_hash_slots[paddr >> PAGE_SHIFT] = ret | 0x80;
+ hash_debug_pagealloc_add_slot(paddr, ret);
}
return ret < 0 ? ret : 0;
}
--
2.46.0