Message-ID: <1538303301-61784-1-git-send-email-zhongjiang@huawei.com>
Date: Sun, 30 Sep 2018 18:28:21 +0800
From: zhong jiang <zhongjiang@huawei.com>
To: <gregkh@linux-foundation.org>
CC: <cl@linux.com>, <penberg@kernel.org>, <rientjes@google.com>,
	<iamjoonsoo.kim@lge.com>, <akpm@linux-foundation.org>,
	<mhocko@kernel.org>, <mgorman@suse.de>, <vbabka@suse.cz>,
	<andrea@kernel.org>, <kirill@shutemov.name>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>
Subject: [STABLE PATCH] slub: make ->cpu_partial unsigned int

From: Alexey Dobriyan <adobriyan@gmail.com>

[ Upstream commit e5d9998f3e09359b372a037a6ac55ba235d95d57 ]

/*
* cpu_partial determined the maximum number of objects
* kept in the per cpu partial lists of a processor.
*/

Can't be negative.

I hit a real issue where this results in a large memory leak. Slabs
are freed in interrupt context, so put_cpu_partial() can be
interrupted more than once. Because pobjects shares a union with lru
in struct page, another core manipulating the page->lru list (for
example, remove_partial() in the slab-freeing path) can leave
pobjects holding a negative value (0xdead0000, a list-poison
pattern). The signed comparison against cpu_partial in
put_cpu_partial() then never triggers a drain, so a large number of
slabs pile up on the per-cpu partial list.
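
To make the overlap concrete, here is a minimal userspace sketch
(struct and field names are illustrative assumptions, not the real
struct page) of how list poisoning clobbers pobjects through the
union, assuming a 64-bit little-endian machine:

#include <stdio.h>

/* Toy stand-in for struct page: model only the union between the
 * lru list_head and the SLUB per-cpu partial fields. */
struct fake_page {
	union {
		struct {
			void *next;		/* lru.next */
			void *prev;		/* lru.prev */
		} lru;
		struct {
			struct fake_page *partial_next;
			int pages;		/* nr of partial slabs */
			int pobjects;		/* approx. nr of objects */
		};
	};
};

int main(void)
{
	struct fake_page page = { .pobjects = 5 };

	/* Another core doing list_del(&page->lru) writes LIST_POISON2
	 * (0xdead000000000200 on x86_64) into lru.prev ... */
	page.lru.prev = (void *)0xdead000000000200UL;

	/* ... and its upper 32 bits land in pobjects. */
	printf("pobjects = %#x (%d)\n",
	       (unsigned int)page.pobjects, page.pobjects);
	return 0;
}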

I posted this issue to the community earlier; the detailed
description is here:

https://www.spinics.net/lists/kernel/msg2870979.html

After applying the patch the issue is fixed, so this is an effective
bugfix that should go into stable.
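
As a companion sketch (again an illustration condensed from the
put_cpu_partial() drain check, not the kernel source), this shows why
the unsigned type changes the outcome of the comparison:

#include <stdio.h>

int main(void)
{
	/* Poisoned value as read back from page->pobjects; storing it
	 * in an int mirrors the kernel's signed field. */
	int pobjects = (int)0xdead0000;

	int cpu_partial_old = 30;		/* before: int */
	unsigned int cpu_partial_new = 30;	/* after: unsigned int */

	/* Signed compare: the poison reads as a large negative number,
	 * the drain condition stays false, and slabs accumulate. */
	printf("signed drain?   %d\n", pobjects > cpu_partial_old);

	/* Unsigned compare: pobjects is converted to unsigned, reads
	 * as roughly 3.7 billion, and the drain triggers. */
	printf("unsigned drain? %d\n", pobjects > cpu_partial_new);
	return 0;
}
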
Link: http://lkml.kernel.org/r/20180305200730.15812-15-adobriyan@gmail.com
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: <stable@vger.kernel.org> # 4.4.x
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
---
 include/linux/slub_def.h | 3 ++-
 mm/slub.c                | 6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 3388511..9b681f2 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -67,7 +67,8 @@ struct kmem_cache {
 	int size;		/* The size of an object including meta data */
 	int object_size;	/* The size of an object without meta data */
 	int offset;		/* Free pointer offset. */
-	int cpu_partial;	/* Number of per cpu partial objects to keep around */
+	/* Number of per cpu partial objects to keep around */
+	unsigned int cpu_partial;
 	struct kmem_cache_order_objects oo;
 
 	/* Allocation and freeing of slabs */
diff --git a/mm/slub.c b/mm/slub.c
index 2284c43..c33b0e1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1661,7 +1661,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 {
 	struct page *page, *page2;
 	void *object = NULL;
-	int available = 0;
+	unsigned int available = 0;
 	int objects;
 
 	/*
@@ -4674,10 +4674,10 @@ static ssize_t cpu_partial_show(struct kmem_cache *s, char *buf)
 static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
 				 size_t length)
 {
-	unsigned long objects;
+	unsigned int objects;
 	int err;
 
-	err = kstrtoul(buf, 10, &objects);
+	err = kstrtouint(buf, 10, &objects);
 	if (err)
 		return err;
 	if (objects && !kmem_cache_has_cpu_partial(s))
--
1.7.12.4