Message-Id: <20190124190142.124996479@linuxfoundation.org>
Date: Thu, 24 Jan 2019 20:19:31 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Christoph Lameter <cl@...ux.com>,
syzbot+d6ed4ec679652b4fd4e4@...kaller.appspotmail.com,
Andrew Morton <akpm@...ux-foundation.org>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: [PATCH 3.18 07/52] slab: alien caches must not be initialized if the allocation of the alien cache failed

3.18-stable review patch. If anyone has any objections, please let me know.

------------------

From: Christoph Lameter <cl@...ux.com>

commit 09c2e76ed734a1d36470d257a778aaba28e86531 upstream.

Callers of __alloc_alien() check for NULL. We must do the same check in
__alloc_alien_cache to avoid NULL pointer dereferences on allocation
failures.
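
As an illustration (not part of the patch), here is a minimal userspace sketch of the pattern being enforced: initialize only after the allocation has been checked, and let callers handle NULL. The struct and function names below are hypothetical stand-ins, and malloc() stands in for kmalloc_node(), which likewise returns NULL on failure.

	#include <stdio.h>
	#include <stdlib.h>

	struct alien_cache_sketch {
		int entries;
		int batch;
	};

	static struct alien_cache_sketch *alloc_cache(int entries, int batch)
	{
		struct alien_cache_sketch *alc = malloc(sizeof(*alc));

		/* Initialize only if the allocation succeeded, as the patch does. */
		if (alc) {
			alc->entries = entries;
			alc->batch = batch;
		}
		/* Callers must check for NULL, as callers of __alloc_alien() do. */
		return alc;
	}

	int main(void)
	{
		struct alien_cache_sketch *alc = alloc_cache(12, 4);

		if (!alc) {
			fprintf(stderr, "allocation failed\n");
			return 1;
		}
		printf("entries=%d batch=%d\n", alc->entries, alc->batch);
		free(alc);
		return 0;
	}
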
Link: http://lkml.kernel.org/r/010001680f42f192-82b4e12e-1565-4ee0-ae1f-1e98974906aa-000000@email.amazonses.com
Fixes: 49dfc304ba241 ("slab: use the lock on alien_cache, instead of the lock on array_cache")
Fixes: c8522a3a5832b ("Slab: introduce alloc_alien")
Signed-off-by: Christoph Lameter <cl@...ux.com>
Reported-by: syzbot+d6ed4ec679652b4fd4e4@...kaller.appspotmail.com
Reviewed-by: Andrew Morton <akpm@...ux-foundation.org>
Cc: Pekka Enberg <penberg@...nel.org>
Cc: David Rientjes <rientjes@...gle.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: <stable@...r.kernel.org>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 mm/slab.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/mm/slab.c
+++ b/mm/slab.c
@@ -869,8 +869,10 @@ static struct alien_cache *__alloc_alien
 	struct alien_cache *alc = NULL;
 
 	alc = kmalloc_node(memsize, gfp, node);
-	init_arraycache(&alc->ac, entries, batch);
-	spin_lock_init(&alc->lock);
+	if (alc) {
+		init_arraycache(&alc->ac, entries, batch);
+		spin_lock_init(&alc->lock);
+	}
 	return alc;
 }