Message-Id: <0b79da9e534bfa35d11154b940095df23ee68a16.1638308023.git.andreyknvl@google.com>
Date: Tue, 30 Nov 2021 23:08:01 +0100
From: andrey.konovalov@...ux.dev
To: Marco Elver <elver@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
Catalin Marinas <catalin.marinas@....com>,
Peter Collingbourne <pcc@...gle.com>
Cc: Andrey Konovalov <andreyknvl@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
kasan-dev@...glegroups.com,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
Will Deacon <will@...nel.org>,
linux-arm-kernel@...ts.infradead.org,
Evgenii Stepanov <eugenis@...gle.com>,
linux-kernel@...r.kernel.org,
Andrey Konovalov <andreyknvl@...gle.com>
Subject: [PATCH 25/31] kasan, vmalloc: don't unpoison VM_ALLOC pages before mapping
From: Andrey Konovalov <andreyknvl@...gle.com>
This patch makes KASAN unpoison vmalloc mappings after they have been
mapped in, when possible: for vmalloc() (identified via VM_ALLOC)
and vm_map_ram().
The reasons for this are:
- For vmalloc() and vm_map_ram(): pages don't get unpoisoned if
  mapping them fails.
- For vmalloc(): HW_TAGS KASAN needs the pages to be mapped before it
  can set memory tags via kasan_unpoison_vmalloc().
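To illustrate, the new ordering in vm_map_ram() looks roughly like this
(a simplified sketch of the hunk below, with the small-count path and
some declarations omitted; see the diff for the exact code):

	/* The virtual range at addr is reserved but not yet mapped. */
	if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
				pages, PAGE_SHIFT) < 0) {
		/* Mapping failed: nothing was unpoisoned, just bail out. */
		vm_unmap_ram(mem, count);
		return NULL;
	}

	/*
	 * Only now mark the range as accessible. With HW_TAGS KASAN this
	 * writes memory tags into the pages themselves, which requires
	 * the mapping to already be in place.
	 */
	mem = kasan_unpoison_vmalloc(mem, size);

	return mem;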
Signed-off-by: Andrey Konovalov <andreyknvl@...gle.com>
---
mm/vmalloc.c | 26 ++++++++++++++++++++++----
1 file changed, 22 insertions(+), 4 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f37d0ed99bf9..82ef1e27e2e4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2208,14 +2208,15 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
mem = (void *)addr;
}
- mem = kasan_unpoison_vmalloc(mem, size);
-
if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
pages, PAGE_SHIFT) < 0) {
vm_unmap_ram(mem, count);
return NULL;
}
+ /* Mark the pages as accessible after they were mapped in. */
+ mem = kasan_unpoison_vmalloc(mem, size);
+
return mem;
}
EXPORT_SYMBOL(vm_map_ram);
@@ -2443,7 +2444,14 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
setup_vmalloc_vm(area, va, flags, caller);
- area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
+ /*
+ * For VM_ALLOC mappings, __vmalloc_node_range() marks the pages as
+ * accessible after they are mapped in.
+ * Otherwise, as the pages can be mapped outside of vmalloc code,
+ * mark them now as a best-effort approach.
+ */
+ if (!(flags & VM_ALLOC))
+ area->addr = kasan_unpoison_vmalloc(area->addr, requested_size);
return area;
}
@@ -3072,6 +3080,12 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
if (!addr)
goto fail;
+ /*
+ * Mark the pages for VM_ALLOC mappings as accessible after they were
+ * mapped in.
+ */
+ addr = kasan_unpoison_vmalloc(addr, real_size);
+
/*
* In this function, newly allocated vm_struct has VM_UNINITIALIZED
* flag. It means that vm_struct is not fully initialized.
@@ -3766,7 +3780,11 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
}
spin_unlock(&vmap_area_lock);
- /* mark allocated areas as accessible */
+ /*
+ * Mark allocated areas as accessible.
+ * As the pages are mapped outside of vmalloc code,
+ * mark them now as a best-effort approach.
+ */
for (area = 0; area < nr_vms; area++)
vms[area]->addr = kasan_unpoison_vmalloc(vms[area]->addr,
vms[area]->size);
--
2.25.1