Message-ID: <20180124042920.GA571@zzz.localdomain>
Date: Tue, 23 Jan 2018 20:29:20 -0800
From: Eric Biggers <ebiggers3@...il.com>
To: Sasha Levin <Alexander.Levin@...rosoft.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
Jiang Biao <jiang.biao2@....com.cn>,
Al Viro <viro@...iv.linux.org.uk>,
Minchan Kim <minchan@...nel.org>,
Michal Hocko <mhocko@...nel.org>,
"zhong.weidong@....com.cn" <zhong.weidong@....com.cn>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH AUTOSEL for 4.9 38/55] fs/mbcache.c: make count_objects()
more robust
On Wed, Jan 24, 2018 at 04:15:49AM +0000, Sasha Levin wrote:
> From: Jiang Biao <jiang.biao2@....com.cn>
>
> [ Upstream commit d5dabd633922ac5ee5bcc67748f7defb8b211469 ]
>
> When running the LTP stress test for 7*24 hours, vmscan occasionally starts
> emitting the following warning continuously:
>
> mb_cache_scan+0x0/0x3f0 negative objects to delete
> nr=-9232265467809300450
> ...
>
> Tracing shows that the freeable value (returned by mb_cache_count()) is -1,
> which causes total_scan to accumulate continuously and overflow.
>
> This patch makes sure that mb_cache_count() cannot return a negative
> value, which makes the mbcache shrinker more robust.
>
> Link: http://lkml.kernel.org/r/1511753419-52328-1-git-send-email-jiang.biao2@zte.com.cn
> Signed-off-by: Jiang Biao <jiang.biao2@....com.cn>
> Cc: Al Viro <viro@...iv.linux.org.uk>
> Cc: Minchan Kim <minchan@...nel.org>
> Cc: Michal Hocko <mhocko@...nel.org>
> Cc: <zhong.weidong@....com.cn>
> Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
> Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
> Signed-off-by: Sasha Levin <alexander.levin@...rosoft.com>
> ---
> fs/mbcache.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/fs/mbcache.c b/fs/mbcache.c
> index c5bd19ffa326..35ab4187bfe1 100644
> --- a/fs/mbcache.c
> +++ b/fs/mbcache.c
> @@ -269,6 +269,9 @@ static unsigned long mb_cache_count(struct shrinker *shrink,
> struct mb_cache *cache = container_of(shrink, struct mb_cache,
> c_shrink);
>
> + /* Unlikely, but not impossible */
> + if (unlikely(cache->c_entry_count < 0))
> + return 0;
> return cache->c_entry_count;
> }
This patch is broken and is reverted in linux-next, via ext4/dev:
bbe45d2460da ("mbcache: revert "fs/mbcache.c: make count_objects() more robust"")
Can you please update your "autosel" script/process/whatever to not select
commits that are reverted?
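
For anyone skimming the archive, below is a minimal, self-contained
user-space sketch of the failure pattern the quoted commit message
describes. The names (toy_cache, toy_count) are hypothetical and the
accumulation step is simplified; this is not the actual mm/vmscan.c or
fs/mbcache.c code. The point is that an unsigned entry count which
underflows looks like a huge value to the count callback, and a signed
accumulator then reports it as a large negative "objects to delete"
number, matching the warning quoted above.

/*
 * Toy model of a shrinker's count_objects() feeding a signed scan
 * accumulator.  Simplified for illustration; not kernel code.
 */
#include <stdio.h>

/* Stand-in for struct mb_cache: the entry count is an unsigned long. */
struct toy_cache {
	unsigned long entry_count;
};

/* Analogue of mb_cache_count(): report how many objects are freeable. */
static unsigned long toy_count(struct toy_cache *cache)
{
	return cache->entry_count;
}

int main(void)
{
	struct toy_cache cache = { .entry_count = 0 };
	long total_scan = 0;

	/* A racy extra decrement underflows the unsigned counter. */
	cache.entry_count--;	/* wraps to ULONG_MAX, i.e. "-1" */

	for (int i = 0; i < 3; i++) {
		unsigned long freeable = toy_count(&cache);

		/* The real scanner scales freeable into a delta before
		 * adding it; here we add it directly to show the sign
		 * confusion once it is treated as a signed quantity. */
		total_scan += (long)freeable;
		printf("pass %d: freeable=%lu (as signed: %ld), total_scan=%ld\n",
		       i, freeable, (long)freeable, total_scan);
	}
	return 0;
}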