Date:	Fri, 7 Feb 2014 12:20:44 -0800 (PST)
From:	Hugh Dickins <hughd@...gle.com>
To:	Tejun Heo <tj@...nel.org>
cc:	Hugh Dickins <hughd@...gle.com>,
	Filipe Brandenburger <filbranden@...gle.com>,
	Li Zefan <lizefan@...wei.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Michal Hocko <mhocko@...e.cz>,
	Johannes Weiner <hannes@...xchg.org>,
	Greg Thelen <gthelen@...gle.com>,
	Michel Lespinasse <walken@...gle.com>,
	Markus Blank-Burian <burian@...nster.de>,
	Shawn Bohrer <shawn.bohrer@...il.com>, cgroups@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] cgroup: use an ordered workqueue for cgroup destruction

Hi Tejun,

On Fri, 7 Feb 2014, Tejun Heo wrote:
> On Thu, Feb 06, 2014 at 03:56:01PM -0800, Hugh Dickins wrote:
> > Sometimes the cleanup after memcg hierarchy testing gets stuck in
> > mem_cgroup_reparent_charges(), unable to bring non-kmem usage down to 0.
> > 
> There may turn out to be several causes, but a major cause is this: the
> workitem to offline the parent can get run before the workitem to offline
> the child;
> > parent's mem_cgroup_reparent_charges() circles around waiting for the
> > child's pages to be reparented to its lrus, but it's holding cgroup_mutex
> > which prevents the child from reaching its mem_cgroup_reparent_charges().
> > 
> > Just use an ordered workqueue for cgroup_destroy_wq.
> 
> Hmmm... I'm not really comfortable with this.  This would seal shut
> any possibility of increasing concurrency in that path, which is okay
> now but I find the combination of such long term commitment and the
> non-obviousness (it's not apparent from looking at memcg code why it
> wouldn't deadlock) very unappealing.  Besides, the only reason
> offline() is currently called under cgroup_mutex is history.  We can
> move it out of cgroup_mutex right now.
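
For anyone following along without the source handy, the shape of the
inversion described in the quoted changelog is roughly the following.
This is a simplified sketch, not the actual cgroup/memcg code:
offline_work_fn() and work_to_memcg() are made-up names standing in
for the real destruction-work path.

/*
 * Both the parent's and the child's offline work items are queued on
 * cgroup_destroy_wq.  If the parent's item starts first, it sits in
 * mem_cgroup_reparent_charges() holding cgroup_mutex, waiting for the
 * child's pages to show up on its own lrus; but that only happens once
 * the child's offline work has run, and that work needs cgroup_mutex too.
 */
static void offline_work_fn(struct work_struct *work)          /* illustrative */
{
        struct mem_cgroup *memcg = work_to_memcg(work);         /* illustrative */

        mutex_lock(&cgroup_mutex);
        /*
         * For memcg this ends up in mem_cgroup_reparent_charges(),
         * which keeps retrying until the group's non-kmem usage
         * reaches zero -- i.e. until its child has been offlined.
         */
        mem_cgroup_reparent_charges(memcg);
        mutex_unlock(&cgroup_mutex);
}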

Thanks for taking the patch into your tree for now,
and thanks to Michal and Hannes for supporting it.

Yes, we're not sealing a door shut with this one-liner.  My first
reaction to the deadlock was indeed "what's the cgroup_mutex for here?",
and I've seen enough deadlocks on cgroup_mutex (though most from this
issue, I now believe) to welcome the idea of reducing its blanket use.

But I think there are likely to be bumps along that road (just as
there have been along the workqueue-ification road), so this ordered
workqueue appears much the safer option for now.  Please rip it out
again when the cgroup_mutex is safely removed from this path.
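
(For reference, the one-liner in question is roughly this change at the
point where cgroup_destroy_wq is allocated -- a sketch of the change,
not necessarily the exact hunk:

-       cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);
+       cgroup_destroy_wq = alloc_ordered_workqueue("cgroup_destroy", 0);
)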

(I've certainly written memcg code myself that "knows" it's already
serialized by cgroup_mutex at the outer level: code which, I think,
never reached anyone else's tree, but I'm not certain of that.)

> 
> But even with offline being called outside cgroup_mutex, IIRC, the
> described problem would still be able to deadlock as long as the tree
> depth is deeper than max concurrency level of the destruction
> workqueue.  Sure, we can give it large enough number but it's
> generally nasty.

You worry me there: I certainly don't want to be introducing new
deadlocks.  You understand workqueues much better than most of us: I'm
not sure what "max concurrency level of the destruction workqueue" is,
but it sounds uncomfortably like an ordered workqueue's max_active 1.
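
For concreteness, my reading of the documented workqueue semantics -- a
sketch, in case I have this wrong: max_active on a bound workqueue is a
per-CPU cap, so even with max_active = 1 a destruction work item could
be in flight on every CPU at once, whereas an ordered workqueue runs at
most one item in the whole system, strictly in queueing order.

        /* bound (per-CPU) workqueue: up to one destruction item
         * in flight on *each* CPU */
        cgroup_destroy_wq = alloc_workqueue("cgroup_destroy", 0, 1);

        /* unbound ordered workqueue: one item in flight in total,
         * executed in the order the items were queued, so a child's
         * work queued before its parent's completes first */
        cgroup_destroy_wq = alloc_ordered_workqueue("cgroup_destroy", 0);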

You don't return to this concern in the following mails of the thread:
did you later decide that it actually won't be a problem?  I'll assume
so for the moment, since you took the patch, but please reassure me.

> 
> One thing I don't get is why memcg has such reverse dependency at all.
> Why does the parent wait for its descendants to do something during
> offline?  Shouldn't it be able to just bail and let whichever
> descendant is still busy propagate things upwards?  That's the
> usual pattern we use for tree shutdowns anyway.  Would that be nasty to
> implement in memcg?

I've no idea how nasty it would be to change memcg around, but Michal
and Hannes appear very open to doing so.  I do think that memcg's current
expectation is very reasonable: it's perfectly normal that a rmdir cannot
succeed until the directory is empty, and to depend upon that fact; but
the use of a workqueue made some things asynchronous which were not before,
which has led to some surprises.

Hugh
