Message-ID: <20200826120740.GP22869@dhcp22.suse.cz>
Date: Wed, 26 Aug 2020 14:07:40 +0200
From: Michal Hocko <mhocko@...e.com>
To: xunlei <xlpang@...ux.alibaba.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] mm: memcg: Fix memcg reclaim soft lockup
On Wed 26-08-20 20:00:47, xunlei wrote:
> > On 2020/8/26 7:00 PM, Michal Hocko wrote:
> > On Wed 26-08-20 18:41:18, xunlei wrote:
> >> On 2020/8/26 4:11 PM, Michal Hocko wrote:
> >>> On Wed 26-08-20 15:27:02, Xunlei Pang wrote:
> >>>> We've hit a soft lockup with CONFIG_PREEMPT_NONE=y when the
> >>>> target memcg doesn't have any reclaimable memory.
> >>>
> >>> Do you have any scenario when this happens or is this some sort of a
> >>> test case?
> >>
> >> It can happen in tiny guest scenarios.
> >
> > OK, you made me more curious. If this is a tiny guest and this is a hard
> > limit reclaim path then we should trigger the oom killer, which should
> > kill the offender and in turn bail out of the try_charge loop
> > (see should_force_charge). So how come this repeats often enough in your
> > setup to cause soft lockups?
> >
>
> should_force_charge() is false; the task currently trapped in the endless
> loop is not the oom victim.
How is that possible? If the oom killer kills a task and that doesn't
resolve the oom situation, then it would go after another one until all
tasks are killed. Or is your task living outside of the memcg it tries
to charge?
--
Michal Hocko
SUSE Labs
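
[Context for the should_force_charge() reference above: it is the check on
the try_charge() retry path in mm/memcontrol.c that lets an oom victim or a
dying task bypass the limit instead of looping. A simplified sketch of that
logic as it looked around v5.8, reconstructed here for illustration and not
a verbatim copy of the tree:

	/*
	 * Simplified sketch of mm/memcontrol.c, circa v5.8.
	 * OOM victims and dying tasks must not keep looping in
	 * reclaim; they are allowed to bypass the limit so they can
	 * exit and release their memory.
	 */
	static bool should_force_charge(void)
	{
		return tsk_is_oom_victim(current) ||
			fatal_signal_pending(current) ||
			(current->flags & PF_EXITING);
	}

	/* Inside the try_charge() retry loop: */
		if (unlikely(should_force_charge()))
			goto force;	/* charge anyway and return */

This matches xunlei's observation: when the looping task is not the oom
victim and not dying, the check stays false and try_charge() keeps
retrying reclaim.]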