Message-ID: <20251009015839.3460231-2-samuel.holland@sifive.com>
Date: Wed, 8 Oct 2025 18:57:37 -0700
From: Samuel Holland <samuel.holland@...ive.com>
To: Palmer Dabbelt <palmer@...belt.com>,
Paul Walmsley <pjw@...nel.org>,
linux-riscv@...ts.infradead.org
Cc: devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
Conor Dooley <conor@...nel.org>,
Alexandre Ghiti <alex@...ti.fr>,
Emil Renner Berthing <kernel@...il.dk>,
Andrew Morton <akpm@...ux-foundation.org>,
Rob Herring <robh+dt@...nel.org>,
Krzysztof Kozlowski <krzk+dt@...nel.org>,
Anshuman Khandual <anshuman.khandual@....com>,
David Hildenbrand <david@...hat.com>,
Dev Jain <dev.jain@....com>,
Lance Yang <lance.yang@...ux.dev>,
SeongJae Park <sj@...nel.org>,
Samuel Holland <samuel.holland@...ive.com>
Subject: [PATCH v2 01/18] mm/ptdump: Replace READ_ONCE() with standard page table accessors

From: Anshuman Khandual <anshuman.khandual@....com>

Replace READ_ONCE() with the standard page table accessors, i.e.
pxdp_get(), which default to READ_ONCE() when the platform does not
override them. Also convert ptep_get_lockless() to ptep_get().
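For context (not part of this patch): the generic accessors in
include/linux/pgtable.h are thin wrappers around READ_ONCE() that an
architecture may override with its own definitions. A simplified
sketch, paraphrasing the pgd-level and pte-level fallbacks:

  /*
   * Simplified sketch of the generic fallbacks in include/linux/pgtable.h;
   * an architecture that provides its own pgdp_get()/ptep_get() takes
   * precedence over these definitions.
   */
  #ifndef pgdp_get
  static inline pgd_t pgdp_get(pgd_t *pgdp)
  {
          return READ_ONCE(*pgdp);
  }
  #endif

  #ifndef ptep_get
  static inline pte_t ptep_get(pte_t *ptep)
  {
          return READ_ONCE(*ptep);
  }
  #endif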
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: David Hildenbrand <david@...hat.com>
Cc: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org
Reviewed-by: Dev Jain <dev.jain@....com>
Acked-by: Lance Yang <lance.yang@...ux.dev>
Acked-by: SeongJae Park <sj@...nel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@....com>
Acked-by: David Hildenbrand <david@...hat.com>
Link: https://lore.kernel.org/r/20251001042502.1400726-1-anshuman.khandual@arm.com/
Signed-off-by: Samuel Holland <samuel.holland@...ive.com>
---
Changes in v2:
- New patch for v2 (taken from LKML)

 mm/ptdump.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/ptdump.c b/mm/ptdump.c
index b600c7f864b8b..973020000096c 100644
--- a/mm/ptdump.c
+++ b/mm/ptdump.c
@@ -31,7 +31,7 @@ static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr,
 			    unsigned long next, struct mm_walk *walk)
 {
 	struct ptdump_state *st = walk->private;
-	pgd_t val = READ_ONCE(*pgd);
+	pgd_t val = pgdp_get(pgd);
 
 #if CONFIG_PGTABLE_LEVELS > 4 && \
 		(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS))
@@ -54,7 +54,7 @@ static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr,
 			    unsigned long next, struct mm_walk *walk)
 {
 	struct ptdump_state *st = walk->private;
-	p4d_t val = READ_ONCE(*p4d);
+	p4d_t val = p4dp_get(p4d);
 
 #if CONFIG_PGTABLE_LEVELS > 3 && \
 		(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS))
@@ -77,7 +77,7 @@ static int ptdump_pud_entry(pud_t *pud, unsigned long addr,
 			    unsigned long next, struct mm_walk *walk)
 {
 	struct ptdump_state *st = walk->private;
-	pud_t val = READ_ONCE(*pud);
+	pud_t val = pudp_get(pud);
 
 #if CONFIG_PGTABLE_LEVELS > 2 && \
 		(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS))
@@ -100,7 +100,7 @@ static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr,
 			    unsigned long next, struct mm_walk *walk)
 {
 	struct ptdump_state *st = walk->private;
-	pmd_t val = READ_ONCE(*pmd);
+	pmd_t val = pmdp_get(pmd);
 
 #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
 	if (pmd_page(val) == virt_to_page(lm_alias(kasan_early_shadow_pte)))
@@ -121,7 +121,7 @@ static int ptdump_pte_entry(pte_t *pte, unsigned long addr,
 			    unsigned long next, struct mm_walk *walk)
 {
 	struct ptdump_state *st = walk->private;
-	pte_t val = ptep_get_lockless(pte);
+	pte_t val = ptep_get(pte);
 
 	if (st->effective_prot_pte)
 		st->effective_prot_pte(st, val);
--
2.47.2