Message-Id: <20190227170608.27963-9-steven.price@arm.com>
Date: Wed, 27 Feb 2019 17:05:42 +0000
From: Steven Price <steven.price@....com>
To: linux-mm@...ck.org
Cc: Steven Price <steven.price@....com>,
Andy Lutomirski <luto@...nel.org>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Arnd Bergmann <arnd@...db.de>, Borislav Petkov <bp@...en8.de>,
Catalin Marinas <catalin.marinas@....com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Ingo Molnar <mingo@...hat.com>,
James Morse <james.morse@....com>,
Jérôme Glisse <jglisse@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will.deacon@....com>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
Mark Rutland <Mark.Rutland@....com>,
"Liang, Kan" <kan.liang@...ux.intel.com>,
Tony Luck <tony.luck@...el.com>,
Fenghua Yu <fenghua.yu@...el.com>, linux-ia64@...r.kernel.org
Subject: [PATCH v3 08/34] ia64: mm: Add p?d_large() definitions

walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.

For ia64, leaf entries are always at the lowest level, so implement
stubs returning 0.
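
As context for reviewers: the sketch below is plain userspace C, not
kernel code, and every type and helper name in it is illustrative. It
only models why a generic walker needs a p?d_large()-style predicate;
without one, a huge mapping at the pmd/pud level would be mistaken for
a pointer to a lower-level table.

/*
 * Minimal userspace model of leaf detection in a page-table walker.
 * entry_large() plays the role of pmd_large()/pud_large(); on ia64
 * it would always return 0, so the walker always descends until the
 * lowest level.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

struct entry {
	bool present;
	bool large;            /* leaf (huge mapping) at this level */
	struct entry *table;   /* next-level table when !large */
};

/* Stand-in for p?d_large(): ia64's stubs would always return 0. */
static bool entry_large(const struct entry *e)
{
	return e->large;
}

static void walk(const struct entry *e, int level)
{
	if (!e->present)
		return;

	if (entry_large(e) || !e->table) {
		printf("leaf at level %d\n", level);
		return;
	}

	/* Not a leaf: descend into the next-level table. */
	walk(e->table, level + 1);
}

int main(void)
{
	struct entry pte  = { .present = true, .large = false, .table = NULL };
	struct entry pmd  = { .present = true, .large = false, .table = &pte };
	struct entry huge = { .present = true, .large = true,  .table = NULL };

	walk(&pmd, 1);   /* descends once, reports a leaf at level 2 */
	walk(&huge, 1);  /* large entry, reports a leaf at level 1 */
	return 0;
}
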
CC: Tony Luck <tony.luck@...el.com>
CC: Fenghua Yu <fenghua.yu@...el.com>
CC: linux-ia64@...r.kernel.org
Signed-off-by: Steven Price <steven.price@....com>
---
arch/ia64/include/asm/pgtable.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
index b1e7468eb65a..84dda295391b 100644
--- a/arch/ia64/include/asm/pgtable.h
+++ b/arch/ia64/include/asm/pgtable.h
@@ -271,6 +271,7 @@ extern unsigned long VMALLOC_END;
#define pmd_none(pmd) (!pmd_val(pmd))
#define pmd_bad(pmd) (!ia64_phys_addr_valid(pmd_val(pmd)))
#define pmd_present(pmd) (pmd_val(pmd) != 0UL)
+#define pmd_large(pmd) (0)
#define pmd_clear(pmdp) (pmd_val(*(pmdp)) = 0UL)
#define pmd_page_vaddr(pmd) ((unsigned long) __va(pmd_val(pmd) & _PFN_MASK))
#define pmd_page(pmd) virt_to_page((pmd_val(pmd) + PAGE_OFFSET))
@@ -278,6 +279,7 @@ extern unsigned long VMALLOC_END;
#define pud_none(pud) (!pud_val(pud))
#define pud_bad(pud) (!ia64_phys_addr_valid(pud_val(pud)))
#define pud_present(pud) (pud_val(pud) != 0UL)
+#define pud_large(pud) (0)
#define pud_clear(pudp) (pud_val(*(pudp)) = 0UL)
#define pud_page_vaddr(pud) ((unsigned long) __va(pud_val(pud) & _PFN_MASK))
#define pud_page(pud) virt_to_page((pud_val(pud) + PAGE_OFFSET))
@@ -286,6 +288,7 @@ extern unsigned long VMALLOC_END;
#define pgd_none(pgd) (!pgd_val(pgd))
#define pgd_bad(pgd) (!ia64_phys_addr_valid(pgd_val(pgd)))
#define pgd_present(pgd) (pgd_val(pgd) != 0UL)
+#define pgd_large(pgd) (0)
#define pgd_clear(pgdp) (pgd_val(*(pgdp)) = 0UL)
#define pgd_page_vaddr(pgd) ((unsigned long) __va(pgd_val(pgd) & _PFN_MASK))
#define pgd_page(pgd) virt_to_page((pgd_val(pgd) + PAGE_OFFSET))
--
2.20.1