Message-Id: <20181128000754.18056-2-rick.p.edgecombe@intel.com>
Date: Tue, 27 Nov 2018 16:07:53 -0800
From: Rick Edgecombe <rick.p.edgecombe@...el.com>
To: akpm@...ux-foundation.org, luto@...nel.org, will.deacon@....com,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
kernel-hardening@...ts.openwall.com,
naveen.n.rao@...ux.vnet.ibm.com, anil.s.keshavamurthy@...el.com,
davem@...emloft.net, mhiramat@...nel.org, rostedt@...dmis.org,
mingo@...hat.com, ast@...nel.org, daniel@...earbox.net,
jeyu@...nel.org, netdev@...r.kernel.org, ard.biesheuvel@...aro.org,
jannh@...gle.com
Cc: kristen@...ux.intel.com, dave.hansen@...el.com,
deneen.t.dock@...el.com,
Rick Edgecombe <rick.p.edgecombe@...el.com>
Subject: [PATCH 1/2] vmalloc: New flag for flush before releasing pages

Since vfree will lazily flush the TLB, but not lazily free the underlying
pages, it can leave stale TLB entries pointing to freed pages that may then
get re-used. This is undesirable for cases where the memory being freed has
special permissions, such as executable.

Having callers flush the TLB after calling vfree still leaves a window in
which the pages are freed but the TLB entries remain. Also, the entire
operation can be deferred if vfree is called from an interrupt, in which
case a TLB flush issued after vfree returns would miss the operation
entirely. So in order to support this use case, a new flag
VM_IMMEDIATE_UNMAP is added, which causes the free operation to take place
like this:

1. Unmap
2. Flush TLB/Unmap aliases
3. Free pages

In the deferred case these steps are all done by the work queue (see the
caller-side sketch below).
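
A minimal caller-side sketch (the surrounding code is illustrative; the
__vmalloc_node_range arguments mirror the vmalloc_exec change below):

	void *buf = __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
					 GFP_KERNEL, PAGE_KERNEL_EXEC,
					 VM_IMMEDIATE_UNMAP, NUMA_NO_NODE,
					 __builtin_return_address(0));
	/* ... */
	vfree(buf);	/* unmap, flush TLB/aliases, then free the pages */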

This implementation derives from two sketches from Dave Hansen and
Andy Lutomirski.

Suggested-by: Dave Hansen <dave.hansen@...el.com>
Suggested-by: Andy Lutomirski <luto@...nel.org>
Suggested-by: Will Deacon <will.deacon@....com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
---
 include/linux/vmalloc.h |  1 +
 mm/vmalloc.c            | 13 +++++++++++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 398e9c95cd61..cca6b6b83cf0 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -21,6 +21,7 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040	/* don't add guard page */
 #define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
+#define VM_IMMEDIATE_UNMAP	0x00000200	/* flush before releasing pages */
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 97d4b25d0373..68766651b5a7 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1516,6 +1516,14 @@ static void __vunmap(const void *addr, int deallocate_pages)
 	debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
 
 	remove_vm_area(addr);
+
+	/*
+	 * Need to flush the TLB before freeing pages in the case of this
+	 * flag. As long as that flush is happening anyway, unmap aliases.
+	 */
+	if (area->flags & VM_IMMEDIATE_UNMAP)
+		vm_unmap_aliases();
+
 	if (deallocate_pages) {
 		int i;
 
@@ -1925,8 +1933,9 @@ EXPORT_SYMBOL(vzalloc_node);
  */
 void *vmalloc_exec(unsigned long size)
 {
-	return __vmalloc_node(size, 1, GFP_KERNEL, PAGE_KERNEL_EXEC,
-			NUMA_NO_NODE, __builtin_return_address(0));
+	return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
+			GFP_KERNEL, PAGE_KERNEL_EXEC, VM_IMMEDIATE_UNMAP,
+			NUMA_NO_NODE, __builtin_return_address(0));
 }
 
 #if defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA32)
--
2.17.1