Message-ID: <20180412191322.GA21205@bombadil.infradead.org>
Date: Thu, 12 Apr 2018 12:13:22 -0700
From: Matthew Wilcox <willy@...radead.org>
To: Christopher Lameter <cl@...ux.com>
Cc: linux-mm@...ck.org, Matthew Wilcox <mawilcox@...rosoft.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Jan Kara <jack@...e.cz>,
Jeff Layton <jlayton@...hat.com>,
Mel Gorman <mgorman@...hsingularity.net>
Subject: [PATCH v3 2/2] slab: __GFP_ZERO is incompatible with a constructor
From: Matthew Wilcox <mawilcox@...rosoft.com>
__GFP_ZERO requests that the object be initialised to all-zeroes,
while the purpose of a constructor is to initialise an object to a
particular pattern. We cannot do both. Add a warning to catch any
users who mistakenly pass __GFP_ZERO when allocating from a slab cache
that has a constructor.
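
As an illustration, here is a minimal sketch of the kind of caller the
warning is meant to catch (hypothetical code, not part of this patch:
struct foo, foo_ctor, foo_cache, foo_example and foo_init are made-up
names; only the slab API calls are real). A cache created with a
constructor is allocated from with __GFP_ZERO, which would wipe out the
pattern the constructor established:

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical object whose initial state the constructor establishes. */
struct foo {
	spinlock_t lock;
	int refcount;
};

static struct kmem_cache *foo_cache;

/* Constructor: initialise every new object to a particular pattern. */
static void foo_ctor(void *obj)
{
	struct foo *f = obj;

	spin_lock_init(&f->lock);
	f->refcount = 1;
}

static void foo_example(void)
{
	struct foo *f;

	/* Fine: the constructor has already initialised the object. */
	f = kmem_cache_alloc(foo_cache, GFP_KERNEL);
	if (f)
		kmem_cache_free(foo_cache, f);

	/*
	 * Wrong: zeroing would overwrite what foo_ctor set up.  With
	 * this patch applied, this call now trips WARN_ON_ONCE().
	 */
	f = kmem_cache_alloc(foo_cache, GFP_KERNEL | __GFP_ZERO);
	if (f)
		kmem_cache_free(foo_cache, f);
}

static int __init foo_init(void)
{
	foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0,
				      SLAB_HWCACHE_ALIGN, foo_ctor);
	if (!foo_cache)
		return -ENOMEM;

	foo_example();
	return 0;
}
module_init(foo_init);
MODULE_LICENSE("GPL");

A caller like this should drop either __GFP_ZERO or the constructor,
since the two initialisations conflict.
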
Fixes: d07dbea46405 ("Slab allocators: support __GFP_ZERO in all allocators")
Signed-off-by: Matthew Wilcox <mawilcox@...rosoft.com>
Acked-by: Johannes Weiner <hannes@...xchg.org>
Acked-by: Vlastimil Babka <vbabka@...e.cz>
Acked-by: Michal Hocko <mhocko@...e.com>
---
mm/slab.c | 2 ++
mm/slob.c | 4 +++-
mm/slub.c | 2 ++
3 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/slab.c b/mm/slab.c
index 58c8cecc26ab..aca63d49b270 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2661,6 +2661,7 @@ static struct page *cache_grow_begin(struct kmem_cache *cachep,
 				invalid_mask, &invalid_mask, flags, &flags);
 		dump_stack();
 	}
+	WARN_ON_ONCE(cachep->ctor && (flags & __GFP_ZERO));
 	local_flags = flags & (GFP_CONSTRAINT_MASK|GFP_RECLAIM_MASK);
 
 	check_irq_off();
@@ -3067,6 +3068,7 @@ static inline void cache_alloc_debugcheck_before(struct kmem_cache *cachep,
 static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep,
 				gfp_t flags, void *objp, unsigned long caller)
 {
+	WARN_ON_ONCE(cachep->ctor && (flags & __GFP_ZERO));
 	if (!objp)
 		return objp;
 	if (cachep->flags & SLAB_POISON) {
diff --git a/mm/slob.c b/mm/slob.c
index 1a46181b675c..958173fd7c24 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -556,8 +556,10 @@ static void *slob_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 					    flags, node);
 	}
 
-	if (b && c->ctor)
+	if (b && c->ctor) {
+		WARN_ON_ONCE(flags & __GFP_ZERO);
 		c->ctor(b);
+	}
 
 	kmemleak_alloc_recursive(b, c->size, 1, c->flags, flags);
 	return b;
diff --git a/mm/slub.c b/mm/slub.c
index a28488643603..0487d316a665 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2434,6 +2434,8 @@ static inline void *new_slab_objects(struct kmem_cache *s, gfp_t flags,
 	struct kmem_cache_cpu *c = *pc;
 	struct page *page;
 
+	WARN_ON_ONCE(s->ctor && (flags & __GFP_ZERO));
+
 	freelist = get_partial(s, flags, node, c);
 
 	if (freelist)
--
2.16.3