Message-Id: <97581e50e594596e0bf8dd5bb3598d5e13013f18.1612538932.git.andreyknvl@google.com>
Date: Fri, 5 Feb 2021 16:39:04 +0100
From: Andrey Konovalov <andreyknvl@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Catalin Marinas <catalin.marinas@....com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Marco Elver <elver@...gle.com>
Cc: Will Deacon <will.deacon@....com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Peter Collingbourne <pcc@...gle.com>,
Evgenii Stepanov <eugenis@...gle.com>,
Branislav Rankov <Branislav.Rankov@....com>,
Kevin Brodsky <kevin.brodsky@....com>,
kasan-dev@...glegroups.com, linux-arm-kernel@...ts.infradead.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Andrey Konovalov <andreyknvl@...gle.com>
Subject: [PATCH v2 03/12] kasan: optimize large kmalloc poisoning
Similarly to kasan_kmalloc(), kasan_kmalloc_large() doesn't need
to unpoison the object, as it was already unpoisoned by alloc_pages()
(or by ksize() for krealloc()).
This patch changes kasan_kmalloc_large() to only poison the redzone.
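
[Editor's note: to make the redzone arithmetic concrete, here is a
minimal user-space sketch. It is an illustration only, not kernel code;
the 4 KiB page size and the page-aligned pointer value are assumptions,
and round_up() mirrors the kernel macro for power-of-two alignment.]

#include <stdio.h>
#include <stddef.h>

#define KASAN_GRANULE_SIZE 8UL	/* shadow granule size used by KASAN */
#define EXAMPLE_PAGE_SIZE 4096UL	/* assumed page size for this example */

/* mirrors the kernel's round_up() for a power-of-two y */
#define round_up(x, y) (((x) + (y) - 1) & ~((y) - 1))

int main(void)
{
	unsigned long ptr = 0x1000;	/* assumed page-aligned allocation */
	size_t size = 100;		/* requested kmalloc size */

	/* aligned part of the redzone, as computed in the patch below */
	unsigned long redzone_start = round_up(ptr + size, KASAN_GRANULE_SIZE);
	unsigned long redzone_end = ptr + EXAMPLE_PAGE_SIZE;

	/* prints: poison [0x1068, 0x2000) as KASAN_PAGE_REDZONE */
	printf("poison [0x%lx, 0x%lx) as KASAN_PAGE_REDZONE\n",
	       redzone_start, redzone_end);

	/* the unaligned tail [ptr + size, redzone_start) lies in the last
	 * object granule and is handled by kasan_poison_last_granule() */
	return 0;
}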
Reviewed-by: Marco Elver <elver@...gle.com>
Signed-off-by: Andrey Konovalov <andreyknvl@...gle.com>
---
mm/kasan/common.c | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 00edbc3eb32e..f2a6bae13053 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -494,7 +494,6 @@ EXPORT_SYMBOL(__kasan_kmalloc);
 void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
 						gfp_t flags)
 {
-	struct page *page;
 	unsigned long redzone_start;
 	unsigned long redzone_end;
 
@@ -504,12 +503,23 @@ void * __must_check __kasan_kmalloc_large(const void *ptr, size_t size,
 	if (unlikely(ptr == NULL))
 		return NULL;
 
-	page = virt_to_page(ptr);
+	/*
+	 * The object has already been unpoisoned by kasan_alloc_pages() for
+	 * alloc_pages() or by ksize() for krealloc().
+	 */
+
+	/*
+	 * The redzone has byte-level precision for the generic mode.
+	 * Partially poison the last object granule to cover the unaligned
+	 * part of the redzone.
+	 */
+	if (IS_ENABLED(CONFIG_KASAN_GENERIC))
+		kasan_poison_last_granule(ptr, size);
+
+	/* Poison the aligned part of the redzone. */
 	redzone_start = round_up((unsigned long)(ptr + size),
 				KASAN_GRANULE_SIZE);
-	redzone_end = (unsigned long)ptr + page_size(page);
-
-	kasan_unpoison(ptr, size);
+	redzone_end = (unsigned long)ptr + page_size(virt_to_page(ptr));
 	kasan_poison((void *)redzone_start, redzone_end - redzone_start,
 		     KASAN_PAGE_REDZONE);
 
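
[Editor's note: for readers unfamiliar with the generic mode's
byte-level precision mentioned in the comment above: a shadow byte
covers an 8-byte granule, and a partially accessible granule stores the
number of accessible bytes in its shadow byte. Below is a rough
user-space model of kasan_poison_last_granule() under that assumption;
it is a sketch for illustration, not the kernel implementation.]

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define KASAN_GRANULE_SIZE 8
#define KASAN_GRANULE_MASK (KASAN_GRANULE_SIZE - 1)

/* toy shadow memory: one byte per 8-byte granule of a 4 KiB "page" */
static uint8_t shadow[4096 / KASAN_GRANULE_SIZE];

static void poison_last_granule(unsigned long off, size_t size)
{
	/* only needed when the object size is not granule-aligned */
	if (size & KASAN_GRANULE_MASK)
		shadow[(off + size) / KASAN_GRANULE_SIZE] =
			size & KASAN_GRANULE_MASK;
}

int main(void)
{
	poison_last_granule(0, 100);	/* 100 = 12 * 8 + 4 */

	/* granule 12 is marked as having its first 4 bytes accessible;
	 * prints: shadow[12] = 4 */
	printf("shadow[12] = %u\n", (unsigned)shadow[12]);
	return 0;
}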
--
2.30.0.365.g02bc693789-goog