Message-Id: <b11824e1cb87c75c4def2b3ac592abb409cebf82.1605046662.git.andreyknvl@google.com>
Date: Tue, 10 Nov 2020 23:20:19 +0100
From: Andrey Konovalov <andreyknvl@...gle.com>
To: Dmitry Vyukov <dvyukov@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Marco Elver <elver@...gle.com>
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will.deacon@....com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
Evgenii Stepanov <eugenis@...gle.com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Branislav Rankov <Branislav.Rankov@....com>,
Kevin Brodsky <kevin.brodsky@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
kasan-dev@...glegroups.com, linux-arm-kernel@...ts.infradead.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Andrey Konovalov <andreyknvl@...gle.com>
Subject: [PATCH v2 15/20] kasan: don't round_up too much
For hardware tag-based mode, kasan_poison_memory() already rounds up the
size. Do the same for software modes and remove round_up() from the common
code.
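To illustrate the effect, here is a minimal user-space sketch of the
software-mode poisoning path (a simplification, not the kernel code: the
shadow array, the mem_to_shadow() helper, the hard-coded granule size and
the example main() are made up for illustration). With the rounding done
inside the poison helper, callers can pass the raw object size:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define GRANULE_SIZE 8			/* one shadow byte covers 8 bytes */

static uint8_t shadow[1 << 16];		/* toy shadow region */

/* Simplified stand-in for kasan_mem_to_shadow(). */
static uint8_t *mem_to_shadow(uintptr_t addr)
{
	return &shadow[addr / GRANULE_SIZE];
}

/* Sketch of the poison helper after this patch: it rounds up itself. */
static void poison_memory(uintptr_t addr, size_t size, uint8_t value)
{
	size = (size + GRANULE_SIZE - 1) & ~(size_t)(GRANULE_SIZE - 1);

	uint8_t *start = mem_to_shadow(addr);
	uint8_t *end = mem_to_shadow(addr + size);

	memset(start, value, end - start);
}

int main(void)
{
	/* A 13-byte object is poisoned as two full 8-byte granules. */
	poison_memory(0x40, 13, 0xFB);
	printf("shadow[8] = 0x%x, shadow[9] = 0x%x, shadow[10] = 0x%x\n",
	       shadow[8], shadow[9], shadow[10]);
	return 0;
}

This is why the callers in common.c below can simply pass
cache->object_size.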
Signed-off-by: Andrey Konovalov <andreyknvl@...gle.com>
Reviewed-by: Dmitry Vyukov <dvyukov@...gle.com>
Link: https://linux-review.googlesource.com/id/Ib397128fac6eba874008662b4964d65352db4aa4
---
mm/kasan/common.c | 8 ++------
mm/kasan/shadow.c | 1 +
2 files changed, 3 insertions(+), 6 deletions(-)
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 60793f8695a8..69ab880abacc 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -218,9 +218,7 @@ void __kasan_unpoison_object_data(struct kmem_cache *cache, void *object)
 
 void __kasan_poison_object_data(struct kmem_cache *cache, void *object)
 {
-	kasan_poison_memory(object,
-			round_up(cache->object_size, KASAN_GRANULE_SIZE),
-			KASAN_KMALLOC_REDZONE);
+	kasan_poison_memory(object, cache->object_size, KASAN_KMALLOC_REDZONE);
 }
 
 /*
@@ -293,7 +291,6 @@ static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
 {
 	u8 tag;
 	void *tagged_object;
-	unsigned long rounded_up_size;
 
 	tag = get_tag(object);
 	tagged_object = object;
@@ -314,8 +311,7 @@ static bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
 		return true;
 	}
 
-	rounded_up_size = round_up(cache->object_size, KASAN_GRANULE_SIZE);
-	kasan_poison_memory(object, rounded_up_size, KASAN_KMALLOC_FREE);
+	kasan_poison_memory(object, cache->object_size, KASAN_KMALLOC_FREE);
 
 	if (!kasan_stack_collection_enabled())
 		return false;
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 8e4fa9157a0b..3f64c9ecbcc0 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -82,6 +82,7 @@ void kasan_poison_memory(const void *address, size_t size, u8 value)
 	 * addresses to this function.
 	 */
 	address = kasan_reset_tag(address);
+	size = round_up(size, KASAN_GRANULE_SIZE);
 
 	shadow_start = kasan_mem_to_shadow(address);
 	shadow_end = kasan_mem_to_shadow(address + size);
--
2.29.2.222.g5d2a92d10f8-goog