Message-ID: <9dd7b7a8225a90019f74eb303b1f269d85628e94.camel@cisco.com>
Date: Mon, 21 Sep 2020 16:15:51 +0000
From: "Julius Hemanth Pitti (jpitti)" <jpitti@...co.com>
To: "greg@...ah.com" <greg@...ah.com>
CC: "vdavydov.dev@...il.com" <vdavydov.dev@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"xlpang@...ux.alibaba.com" <xlpang@...ux.alibaba.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
"hannes@...xchg.org" <hannes@...xchg.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"xe-linux-external(mailer list)" <xe-linux-external@...co.com>,
"mhocko@...e.com" <mhocko@...e.com>,
"ktkhai@...tuozzo.com" <ktkhai@...tuozzo.com>
Subject: Re: [PATCH stable v5.8] mm: memcg: fix memcg reclaim soft lockup
On Mon, 2020-09-21 at 18:12 +0200, Greg KH wrote:
> On Thu, Sep 17, 2020 at 06:19:13PM -0700, Julius Hemanth Pitti wrote:
> > From: Xunlei Pang <xlpang@...ux.alibaba.com>
> >
> > commit e3336cab2579012b1e72b5265adf98e2d6e244ad upstream.
> >
> > We've hit a soft lockup with "CONFIG_PREEMPT_NONE=y" when the target memcg
> > doesn't have any reclaimable memory.
> >
> > It can be easily reproduced as below:
> >
> > watchdog: BUG: soft lockup - CPU#0 stuck for 111s![memcg_test:2204]
> > CPU: 0 PID: 2204 Comm: memcg_test Not tainted 5.9.0-rc2+ #12
> > Call Trace:
> > shrink_lruvec+0x49f/0x640
> > shrink_node+0x2a6/0x6f0
> > do_try_to_free_pages+0xe9/0x3e0
> > try_to_free_mem_cgroup_pages+0xef/0x1f0
> > try_charge+0x2c1/0x750
> > mem_cgroup_charge+0xd7/0x240
> > __add_to_page_cache_locked+0x2fd/0x370
> > add_to_page_cache_lru+0x4a/0xc0
> > pagecache_get_page+0x10b/0x2f0
> > filemap_fault+0x661/0xad0
> > ext4_filemap_fault+0x2c/0x40
> > __do_fault+0x4d/0xf9
> > handle_mm_fault+0x1080/0x1790
> >
> > It only happens on our 1-vcpu instances, because the oom reaper never gets
> > a chance to run and reclaim the to-be-killed process.
> >
> > Add a cond_resched() in the upper-level shrink_node_memcgs() to solve this
> > issue; this gives us a scheduling point for each memcg in the reclaimed
> > hierarchy, without any dependency on the amount of reclaimable memory in
> > that memcg, thus making reclaim more predictable.
> >
> > Suggested-by: Michal Hocko <mhocko@...e.com>
> > Signed-off-by: Xunlei Pang <xlpang@...ux.alibaba.com>
> > Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
> > Acked-by: Chris Down <chris@...isdown.name>
> > Acked-by: Michal Hocko <mhocko@...e.com>
> > Acked-by: Johannes Weiner <hannes@...xchg.org>
> > Link: http://lkml.kernel.org/r/1598495549-67324-1-git-send-email-xlpang@linux.alibaba.com
> > Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
> > Fixes: b0dedc49a2da ("mm/vmscan.c: iterate only over charged shrinkers during memcg shrink_slab()")
> > Cc: stable@...r.kernel.org
> > Signed-off-by: Julius Hemanth Pitti <jpitti@...co.com>
> > ---
> > mm/vmscan.c | 8 ++++++++
> > 1 file changed, 8 insertions(+)
>
> The Fixes: tag you show here goes back to 4.19; can you provide a 4.19.y
> and 5.4.y version of this as well?
Sure. Will send for both 5.4.y and 4.19.y.
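
For reference, the upstream change is a single cond_resched() (plus a comment)
at the top of the per-memcg reclaim loop in mm/vmscan.c. A rough sketch of how
that loop looks with the fix applied (in 5.8 the loop lives in
shrink_node_memcgs(); the surrounding context differs in the older stable
trees, so the backport hunks will not be identical):

	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
	do {
		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
		unsigned long reclaimed;
		unsigned long scanned;

		/*
		 * This loop can become CPU-bound when target memcgs
		 * aren't eligible for reclaim - either because they
		 * don't have any reclaimable pages, or because their
		 * memory is explicitly protected. Avoid soft lockups.
		 */
		cond_resched();

		/*
		 * ... per-memcg protection checks, shrink_lruvec() and
		 * shrink_slab() follow unchanged ...
		 */
	} while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));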
>
> thanks,
>
> greg k-h