Message-Id: <20191206135316.47703-14-steven.price@arm.com>
Date: Fri, 6 Dec 2019 13:53:04 +0000
From: Steven Price <steven.price@....com>
To: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org
Cc: Steven Price <steven.price@....com>,
Andy Lutomirski <luto@...nel.org>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Arnd Bergmann <arnd@...db.de>, Borislav Petkov <bp@...en8.de>,
Catalin Marinas <catalin.marinas@....com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>,
James Morse <james.morse@....com>,
Jérôme Glisse <jglisse@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
Mark Rutland <Mark.Rutland@....com>,
"Liang, Kan" <kan.liang@...ux.intel.com>
Subject: [PATCH v16 13/25] mm: pagewalk: Don't lock PTEs for walk_page_range_novma()
walk_page_range_novma() can be used to walk the page tables of the kernel or
page tables set up for firmware. These page tables may contain entries that
are not backed by a struct page, so it isn't (in general) possible to take the
PTE lock for the pte_entry() callback. So update walk_pte_range() to only
take the lock when no_vma==false, and add a comment to walk_page_range_novma()
explaining how it differs from walk_page_range().
Signed-off-by: Steven Price <steven.price@....com>
---
mm/pagewalk.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
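[Not part of the patch; illustration only.] A minimal sketch of how a caller
might use the new interface, assuming a hypothetical dump_pte_entry()
callback and dump_kernel_range() helper. Since a no_vma walk does not take
the PTE lock, the callback reads the entry with READ_ONCE() and must not rely
on the lock being held; callers of this kind hold the mm's mmap_sem for read
around the walk.

#include <linux/mm.h>
#include <linux/pagewalk.h>
#include <linux/printk.h>

/* Hypothetical callback: called without the PTE lock in the no_vma case. */
static int dump_pte_entry(pte_t *pte, unsigned long addr,
			  unsigned long next, struct mm_walk *walk)
{
	pte_t val = READ_ONCE(*pte);

	if (pte_present(val))
		pr_info("pte at %lx: %llx\n", addr,
			(unsigned long long)pte_val(val));
	return 0;
}

static const struct mm_walk_ops dump_ops = {
	.pte_entry = dump_pte_entry,
};

/* Walk the kernel page tables for [start, end). */
static int dump_kernel_range(unsigned long start, unsigned long end)
{
	int err;

	down_read(&init_mm.mmap_sem);
	err = walk_page_range_novma(&init_mm, start, end, &dump_ops, NULL);
	up_read(&init_mm.mmap_sem);

	return err;
}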
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index efa464cf079b..1b9a3ba24c51 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -10,9 +10,10 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
pte_t *pte;
int err = 0;
const struct mm_walk_ops *ops = walk->ops;
- spinlock_t *ptl;
+ spinlock_t *uninitialized_var(ptl);
- pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
+ pte = walk->no_vma ? pte_offset_map(pmd, addr) :
+ pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
for (;;) {
err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
if (err)
@@ -23,7 +24,9 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
pte++;
}
- pte_unmap_unlock(pte, ptl);
+ if (!walk->no_vma)
+ spin_unlock(ptl);
+ pte_unmap(pte);
return err;
}
@@ -383,6 +386,12 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
return err;
}
+/*
+ * Similar to walk_page_range() but can walk any page tables even if they are
+ * not backed by VMAs. Because 'unusual' entries may be walked this function
+ * will also not lock the PTEs for the pte_entry() callback. This is useful for
+ * walking the kernel page tables or page tables for firmware.
+ */
int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
unsigned long end, const struct mm_walk_ops *ops,
void *private)
--
2.20.1