Message-Id: <20181129171230.18699-5-ard.biesheuvel@linaro.org>
Date: Thu, 29 Nov 2018 18:12:23 +0100
From: Ard Biesheuvel <ard.biesheuvel@...aro.org>
To: linux-efi@...r.kernel.org, Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Ard Biesheuvel <ard.biesheuvel@...aro.org>,
linux-kernel@...r.kernel.org, Andy Lutomirski <luto@...nel.org>,
Arend van Spriel <arend.vanspriel@...adcom.com>,
Bhupesh Sharma <bhsharma@...hat.com>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...el.com>,
Eric Snowberg <eric.snowberg@...cle.com>,
Hans de Goede <hdegoede@...hat.com>,
Joe Perches <joe@...ches.com>,
Jon Hunter <jonathanh@...dia.com>,
Julien Thierry <julien.thierry@....com>,
Marc Zyngier <marc.zyngier@....com>,
Nathan Chancellor <natechancellor@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Sai Praneeth Prakhya <sai.praneeth.prakhya@...el.com>,
Sedat Dilek <sedat.dilek@...il.com>,
YiFei Zhu <zhuyifei1999@...il.com>
Subject: [PATCH 04/11] x86/mm/pageattr: Introduce helper function to unmap EFI boot services
From: Sai Praneeth Prakhya <sai.praneeth.prakhya@...el.com>
Ideally, once the kernel has assumed control of the platform, firmware
shouldn't access the EFI boot services code/data regions. However, this
turns out not to be true on many x86 platforms. Hence, during boot, the
kernel reserves the EFI boot services code/data regions [1] and maps [2]
them into efi_pgd so that the call to set_virtual_address_map() doesn't
fail. After returning from set_virtual_address_map(), the kernel frees
the reserved regions [3], but they remain mapped. Hence, introduce
kernel_unmap_pages_in_pgd(), which will later be used to unmap the EFI
boot services code/data regions.
While at it, modify kernel_map_pages_in_pgd() by:
1. Adding the __init modifier, because it is used *only* during boot.
2. Adding a warning if it is called after SMP is initialized, because it
   uses __flush_tlb_all(), which flushes mappings only on the current
   CPU (see the sketch below).
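For context: __flush_tlb_all() only flushes the TLB of the CPU it runs
on, whereas flush_tlb_all() IPIs every online CPU. A rough sketch of the
distinction (illustrative, not the verbatim arch/x86 code):

  /* Local flush: affects only the CPU executing this. */
  static inline void __flush_tlb_all(void)
  {
          if (boot_cpu_has(X86_FEATURE_PGE))
                  __flush_tlb_global();   /* toggle CR4.PGE to zap global pages */
          else
                  __flush_tlb();          /* reload CR3 */
  }

  /* SMP-safe flush: runs the local flush on every online CPU via IPI. */
  void flush_tlb_all(void)
  {
          on_each_cpu(do_flush_tlb_all, NULL, 1);
  }

Since both helpers here run before smp_init(), the local flush is
sufficient, and the WARN_ONCE() guards against future misuse.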
Unmapping the EFI boot services code/data regions will result in
clearing the _PAGE_PRESENT bit, and this shouldn't trip over the L1TF
mitigation, because not-present PTEs are already handled by
protnone_mask() at arch/x86/include/asm/pgtable-invert.h (a sketch of
that inversion follows the references below).
[1] efi_reserve_boot_services()
[2] efi_map_region() -> __map_region() -> kernel_map_pages_in_pgd()
[3] efi_free_boot_services()
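For reference, the reason clearing _PAGE_PRESENT is safe w.r.t. L1TF is
the address inversion applied to not-present PTEs. A simplified sketch
of the idea (not the verbatim pgtable-invert.h code):

  /*
   * A nonzero PTE that is not present gets its physical address bits
   * inverted, so the stale frame number it carries can't be used to
   * steer an L1TF speculative load at interesting memory.
   */
  static inline bool __pte_needs_invert(u64 val)
  {
          return val && !(val & _PAGE_PRESENT);
  }

  /* Bits to XOR when reading/writing such a PTE: all or none. */
  static inline u64 protnone_mask(u64 val)
  {
          return __pte_needs_invert(val) ? ~0ull : 0;
  }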
Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@...el.com>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Dave Hansen <dave.hansen@...el.com>
Cc: Bhupesh Sharma <bhsharma@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Reviewed-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@...aro.org>
---
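As a sketch of the intended use: a follow-up patch in this series adds
the actual call sites; the function and field names below are meant to
be illustrative, not authoritative:

  /* Tear down the 1:1 and VA mappings of one boot services region. */
  static void __init efi_unmap_pages(efi_memory_desc_t *md)
  {
          pgd_t *pgd = efi_mm.pgd;
          u64 pa = md->phys_addr;
          u64 va = md->virt_addr;

          if (kernel_unmap_pages_in_pgd(pgd, pa, md->num_pages))
                  pr_err("Failed to unmap 1:1 mapping for 0x%llx\n", pa);

          if (kernel_unmap_pages_in_pgd(pgd, va, md->num_pages))
                  pr_err("Failed to unmap VA mapping for 0x%llx\n", va);
  }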
arch/x86/include/asm/pgtable_types.h | 8 ++++--
arch/x86/mm/pageattr.c | 40 ++++++++++++++++++++++++++--
2 files changed, 44 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 106b7d0e2dae..d6ff0bbdb394 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -564,8 +564,12 @@ extern pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
unsigned int *level);
extern pmd_t *lookup_pmd_address(unsigned long address);
extern phys_addr_t slow_virt_to_phys(void *__address);
-extern int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
- unsigned numpages, unsigned long page_flags);
+extern int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn,
+ unsigned long address,
+ unsigned numpages,
+ unsigned long page_flags);
+extern int __init kernel_unmap_pages_in_pgd(pgd_t *pgd, unsigned long address,
+ unsigned long numpages);
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_X86_PGTABLE_DEFS_H */
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index db7a10082238..bac35001d896 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -2338,8 +2338,8 @@ bool kernel_page_present(struct page *page)
#endif /* CONFIG_DEBUG_PAGEALLOC */
-int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
- unsigned numpages, unsigned long page_flags)
+int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
+ unsigned numpages, unsigned long page_flags)
{
int retval = -EINVAL;
@@ -2353,6 +2353,8 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
.flags = 0,
};
+ WARN_ONCE(num_online_cpus() > 1, "Don't call after initializing SMP");
+
if (!(__supported_pte_mask & _PAGE_NX))
goto out;
@@ -2374,6 +2376,40 @@ int kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
return retval;
}
+/*
+ * __flush_tlb_all() flushes mappings only on the current CPU and hence this
+ * function shouldn't be used in an SMP environment. Presently, it's used
+ * only during boot (way before smp_init()) by the EFI subsystem and is ok.
+ */
+int __init kernel_unmap_pages_in_pgd(pgd_t *pgd, unsigned long address,
+ unsigned long numpages)
+{
+ int retval;
+
+ /*
+ * The typical sequence for unmapping is to find a pte through
+ * lookup_address_in_pgd() (ideally, it should never return NULL because
+ * the address is already mapped) and change its protections. As pfn is
+ * the *target* of a mapping, it's not useful while unmapping.
+ */
+ struct cpa_data cpa = {
+ .vaddr = &address,
+ .pfn = 0,
+ .pgd = pgd,
+ .numpages = numpages,
+ .mask_set = __pgprot(0),
+ .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW),
+ .flags = 0,
+ };
+
+ WARN_ONCE(num_online_cpus() > 1, "Don't call after initializing SMP");
+
+ retval = __change_page_attr_set_clr(&cpa, 0);
+ __flush_tlb_all();
+
+ return retval;
+}
+
/*
* The testcases use internal knowledge of the implementation that shouldn't
* be exposed to the rest of the kernel. Include these directly here.
--
2.19.1