Message-ID: <20180124041521.32223-38-alexander.levin@microsoft.com>
Date: Wed, 24 Jan 2018 04:15:49 +0000
From: Sasha Levin <Alexander.Levin@...rosoft.com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>
CC: Jiang Biao <jiang.biao2@....com.cn>,
Al Viro <viro@...iv.linux.org.uk>,
Minchan Kim <minchan@...nel.org>,
Michal Hocko <mhocko@...nel.org>,
"zhong.weidong@....com.cn" <zhong.weidong@....com.cn>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Sasha Levin <Alexander.Levin@...rosoft.com>
Subject: [PATCH AUTOSEL for 4.9 38/55] fs/mbcache.c: make count_objects() more
robust
From: Jiang Biao <jiang.biao2@....com.cn>
[ Upstream commit d5dabd633922ac5ee5bcc67748f7defb8b211469 ]
When running the LTP stress test for 7*24 hours, vmscan occasionally emits
the following warning continuously:

mb_cache_scan+0x0/0x3f0 negative objects to delete
nr=-9232265467809300450
...

Tracing shows that the freeable count (the value mb_cache_count() returns)
is -1, which causes total_scan to keep accumulating until it overflows.

This patch makes sure that mb_cache_count() cannot return a negative
value, which makes the mbcache shrinker more robust.
Link: http://lkml.kernel.org/r/1511753419-52328-1-git-send-email-jiang.biao2@zte.com.cn
Signed-off-by: Jiang Biao <jiang.biao2@....com.cn>
Cc: Al Viro <viro@...iv.linux.org.uk>
Cc: Minchan Kim <minchan@...nel.org>
Cc: Michal Hocko <mhocko@...nel.org>
Cc: <zhong.weidong@....com.cn>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Sasha Levin <alexander.levin@...rosoft.com>
---
fs/mbcache.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/mbcache.c b/fs/mbcache.c
index c5bd19ffa326..35ab4187bfe1 100644
--- a/fs/mbcache.c
+++ b/fs/mbcache.c
@@ -269,6 +269,9 @@ static unsigned long mb_cache_count(struct shrinker *shrink,
 	struct mb_cache *cache = container_of(shrink, struct mb_cache,
 					      c_shrink);
 
+	/* Unlikely, but not impossible */
+	if (unlikely(cache->c_entry_count < 0))
+		return 0;
 	return cache->c_entry_count;
 }
--
2.11.0