Date:	Fri, 27 May 2016 19:14:00 +0200
From:	Alexander Potapenko <glider@...gle.com>
To:	adech.fo@...il.com, cl@...ux.com, dvyukov@...gle.com,
	akpm@...ux-foundation.org, rostedt@...dmis.org,
	iamjoonsoo.kim@....com, js1304@...il.com, kcc@...gle.com,
	aryabinin@...tuozzo.com
Cc:	kasan-dev@...glegroups.com, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH v1] [mm] Set page->slab_cache for every page allocated for a kmem_cache.

It's reasonable to rely on the fact that, for every page allocated for a
kmem_cache, the |slab_cache| field points to that cache. Without this it's
hard to figure out which cache an allocated object belongs to.
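
For illustration, a minimal sketch (the helper name is hypothetical, not
part of this patch) of the lookup this invariant enables: once every page
of a possibly high-order slab carries the back-pointer, the owning cache
is reachable from any address inside the slab via virt_to_page():

	/*
	 * Hypothetical helper, assuming <linux/mm.h> and <linux/slab.h>:
	 * map an object address to its page, then read the back-pointer
	 * that this patch now sets on every page of the slab.
	 */
	static inline struct kmem_cache *cache_from_obj_addr(const void *obj)
	{
		struct page *page = virt_to_page(obj);

		return page->slab_cache;
	}

Before this change, such a lookup was only reliable for addresses on the
page where |slab_cache| happened to be set.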

Fixes: 55834c59098d0c5a97b0f324 ("mm: kasan: initial memory quarantine implementation")
Signed-off-by: Alexander Potapenko <glider@...gle.com>
---
 mm/slab.c | 7 ++++++-
 mm/slub.c | 8 +++++---
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index cc8bbc1..ac6c251 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2703,8 +2703,13 @@ static void slab_put_obj(struct kmem_cache *cachep,
 static void slab_map_pages(struct kmem_cache *cache, struct page *page,
 			   void *freelist)
 {
-	page->slab_cache = cache;
+	int i, nr_pages;
+	char *start = page_address(page);
+
 	page->freelist = freelist;
+	nr_pages = (1 << cache->gfporder);
+	for (i = 0; i < nr_pages; i++)
+		virt_to_page(start + PAGE_SIZE * i)->slab_cache = cache;
 }
 
 /*
diff --git a/mm/slub.c b/mm/slub.c
index 825ff45..fc75ddb 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1411,7 +1411,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	struct kmem_cache_order_objects oo = s->oo;
 	gfp_t alloc_gfp;
 	void *start, *p;
-	int idx, order;
+	int idx, order, i, pages;
 
 	flags &= gfp_allowed_mask;
 
@@ -1442,9 +1442,9 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 		stat(s, ORDER_FALLBACK);
 	}
 
+	pages = 1 << oo_order(oo);
 	if (kmemcheck_enabled &&
 	    !(s->flags & (SLAB_NOTRACK | DEBUG_DEFAULT_FLAGS))) {
-		int pages = 1 << oo_order(oo);
 
 		kmemcheck_alloc_shadow(page, oo_order(oo), alloc_gfp, node);
 
@@ -1461,13 +1461,15 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	page->objects = oo_objects(oo);
 
 	order = compound_order(page);
-	page->slab_cache = s;
 	__SetPageSlab(page);
 	if (page_is_pfmemalloc(page))
 		SetPageSlabPfmemalloc(page);
 
 	start = page_address(page);
 
+	for (i = 0; i < pages; i++)
+		virt_to_page(start + PAGE_SIZE * i)->slab_cache = s;
+
 	if (unlikely(s->flags & SLAB_POISON))
 		memset(start, POISON_INUSE, PAGE_SIZE << order);
 
-- 
2.8.0.rc3.226.g39d4020
