Message-Id: <20190313191506.159677-19-sashal@kernel.org>
Date: Wed, 13 Mar 2019 15:14:52 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Andrey Konovalov <andreyknvl@...gle.com>,
Alexander Potapenko <glider@...gle.com>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Catalin Marinas <catalin.marinas@....com>,
Christoph Lameter <cl@...ux.com>,
David Rientjes <rientjes@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Evgeniy Stepanov <eugenis@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Kostya Serebryany <kcc@...gle.com>,
Pekka Enberg <penberg@...nel.org>, Qian Cai <cai@....pw>,
Vincenzo Frascino <vincenzo.frascino@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Sasha Levin <sashal@...nel.org>, linux-mm@...ck.org
Subject: [PATCH AUTOSEL 4.14 19/33] kasan, slub: move kasan_poison_slab hook before page_address
From: Andrey Konovalov <andreyknvl@...gle.com>
[ Upstream commit a71012242837fe5e67d8c999cfc357174ed5dba0 ]
With tag-based KASAN, page_address() looks at the page flags to see whether
the resulting pointer needs to have a tag set. Since we don't want to set
a tag when page_address() is called on SLAB pages, we call
page_kasan_tag_reset() in kasan_poison_slab(). However, in allocate_slab()
page_address() is called before kasan_poison_slab(). Fix it by changing
the order.
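As a minimal, standalone sketch of the ordering this enforces (stand-in
types and stubbed helpers only, not the actual kernel interfaces): the
point is just that kasan_poison_slab() has to run before page_address(),
so that page_kasan_tag_reset() has already cleared the tag bits that
page_address() consults.

/*
 * Standalone sketch: all types and helpers below are stand-ins for the
 * real kernel API, kept only to show the call ordering after the fix.
 */
#include <stdio.h>

struct page { int kasan_tag; };

static void page_kasan_tag_reset(struct page *page)
{
        page->kasan_tag = 0;            /* SLAB pages must not carry a tag */
}

static void kasan_poison_slab(struct page *page)
{
        page_kasan_tag_reset(page);     /* must happen before page_address() */
        /* ... poison the page's shadow memory ... */
}

static void *page_address(struct page *page)
{
        /* With tag-based KASAN the returned pointer reflects the page tag. */
        printf("page_address() sees tag %d\n", page->kasan_tag);
        return NULL;                    /* stub: no real mapping here */
}

static void setup_page_debug(void *addr, int order)
{
        /* Stand-in for the new SLUB helper that poisons the page contents. */
        (void)addr;
        (void)order;
}

int main(void)
{
        struct page page = { .kasan_tag = 42 };  /* pretend a stale tag is set */

        /* Fixed order: reset the tag first, then compute the address. */
        kasan_poison_slab(&page);
        void *start = page_address(&page);       /* now computed with tag 0 */
        setup_page_debug(start, 0);

        return 0;
}

In the real patch the POISON_INUSE memset also moves into a new
setup_page_debug() helper, so the debug-only poisoning stays stubbed out
when CONFIG_SLUB_DEBUG=n, which is what the follow-up compilation fix
noted below addresses.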
[andreyknvl@...gle.com: fix compilation error when CONFIG_SLUB_DEBUG=n]
Link: http://lkml.kernel.org/r/ac27cc0bbaeb414ed77bcd6671a877cf3546d56e.1550066133.git.andreyknvl@google.com
Link: http://lkml.kernel.org/r/cd895d627465a3f1c712647072d17f10883be2a1.1549921721.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@...gle.com>
Cc: Alexander Potapenko <glider@...gle.com>
Cc: Andrey Ryabinin <aryabinin@...tuozzo.com>
Cc: Catalin Marinas <catalin.marinas@....com>
Cc: Christoph Lameter <cl@...ux.com>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Evgeniy Stepanov <eugenis@...gle.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Kostya Serebryany <kcc@...gle.com>
Cc: Pekka Enberg <penberg@...nel.org>
Cc: Qian Cai <cai@....pw>
Cc: Vincenzo Frascino <vincenzo.frascino@....com>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
mm/slub.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 220d42e592ef..f14ef59c9e57 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1087,6 +1087,16 @@ static void setup_object_debug(struct kmem_cache *s, struct page *page,
init_tracking(s, object);
}
+static void setup_page_debug(struct kmem_cache *s, void *addr, int order)
+{
+ if (!(s->flags & SLAB_POISON))
+ return;
+
+ metadata_access_enable();
+ memset(addr, POISON_INUSE, PAGE_SIZE << order);
+ metadata_access_disable();
+}
+
static inline int alloc_consistency_checks(struct kmem_cache *s,
struct page *page,
void *object, unsigned long addr)
@@ -1304,6 +1314,8 @@ unsigned long kmem_cache_flags(unsigned long object_size,
#else /* !CONFIG_SLUB_DEBUG */
static inline void setup_object_debug(struct kmem_cache *s,
struct page *page, void *object) {}
+static inline void setup_page_debug(struct kmem_cache *s,
+ void *addr, int order) {}
static inline int alloc_debug_processing(struct kmem_cache *s,
struct page *page, void *object, unsigned long addr) { return 0; }
@@ -1599,12 +1611,11 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
if (page_is_pfmemalloc(page))
SetPageSlabPfmemalloc(page);
- start = page_address(page);
+ kasan_poison_slab(page);
- if (unlikely(s->flags & SLAB_POISON))
- memset(start, POISON_INUSE, PAGE_SIZE << order);
+ start = page_address(page);
- kasan_poison_slab(page);
+ setup_page_debug(s, start, order);
shuffle = shuffle_freelist(s, page);
--
2.19.1