Message-ID: <20191113170823.GA12464@castle.DHCP.thefacebook.com>
Date: Wed, 13 Nov 2019 17:08:29 +0000
From: Roman Gushchin <guro@...com>
To: Michal Koutný <mkoutny@...e.com>
CC: "linux-mm@...ck.org" <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>,
"Johannes Weiner" <hannes@...xchg.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH 1/2] mm: memcg: switch to css_tryget() in
get_mem_cgroup_from_mm()
On Wed, Nov 13, 2019 at 05:29:34PM +0100, Michal Koutný wrote:
> Hi.
>
> On Wed, Nov 06, 2019 at 02:51:30PM -0800, Roman Gushchin <guro@...com> wrote:
> > Let's fix it by switching from css_tryget_online() to css_tryget().
> Is this a safe thing to do? The stack captures a kmem charge path, with
> css_tryget() it may happen it gets an offlined memcg and carry out
> charge into it. What happens when e.g. memcg_deactivate_kmem_caches is
> skipped as a consequence?
The thing here is that css_tryget_online() cannot pin the online state,
so even if it returned true, the cgroup can already be offline by the time
the function returns. So if we rely on it anywhere, it's already broken.
Generally speaking, it's better to reduce its usage to the bare minimum.
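
For reference, the loop in question looks roughly like this (a simplified
sketch of get_mem_cgroup_from_mm(), not the exact upstream code; details
differ between kernel versions):

	struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
	{
		struct mem_cgroup *memcg;

		rcu_read_lock();
		do {
			memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
			if (unlikely(!memcg))
				memcg = root_mem_cgroup;
			/*
			 * If the memcg that mm->owner points at is already
			 * offline, css_tryget_online() keeps failing and this
			 * loop spins forever. The patch switches it to plain
			 * css_tryget(), which still takes a reference safely
			 * but doesn't require the cgroup to be online.
			 */
		} while (!css_tryget_online(&memcg->css));
		rcu_read_unlock();

		return memcg;
	}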
>
> > The problem is caused by an exiting task which is associated with
> > an offline memcg. We're iterating over and over in the
> > do {} while (!css_tryget_online()) loop, but obviously the memcg won't
> > become online and the exiting task won't be migrated to a live memcg.
> As discussed in other replies, the task is not yet exiting. However, the
> access to memcg isn't through `current` but `mm->owner`, i.e. another
> task of a threadgroup may have got stuck in an offlined memcg (I don't
> have a good explanation for that though).
Yes, that's true, and I have no idea how the memcg can be offline in this case either.
We've seen it only a few times in fb production, so it seems to be a really
rare case. It could be anything from a tiny race somewhere to a CPU bug.
Thanks!