lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20140207140402.GA3304@htj.dyndns.org>
Date:	Fri, 7 Feb 2014 09:04:02 -0500
From:	Tejun Heo <tj@...nel.org>
To:	Hugh Dickins <hughd@...gle.com>
Cc:	Filipe Brandenburger <filbranden@...gle.com>,
	Li Zefan <lizefan@...wei.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Michal Hocko <mhocko@...e.cz>,
	Johannes Weiner <hannes@...xchg.org>,
	Greg Thelen <gthelen@...gle.com>,
	Michel Lespinasse <walken@...gle.com>,
	Markus Blank-Burian <burian@...nster.de>,
	Shawn Bohrer <shawn.bohrer@...il.com>, cgroups@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] cgroup: use an ordered workqueue for cgroup destruction

Hello, Hugh.

On Thu, Feb 06, 2014 at 03:56:01PM -0800, Hugh Dickins wrote:
> Sometimes the cleanup after memcg hierarchy testing gets stuck in
> mem_cgroup_reparent_charges(), unable to bring non-kmem usage down to 0.
> 
> There may turn out to be several causes, but a major cause is this: the
> workitem to offline parent can get run before workitem to offline child;
> parent's mem_cgroup_reparent_charges() circles around waiting for the
> child's pages to be reparented to its lrus, but it's holding cgroup_mutex
> which prevents the child from reaching its mem_cgroup_reparent_charges().
> 
> Just use an ordered workqueue for cgroup_destroy_wq.

Hmmm... I'm not really comfortable with this.  This would seal shut
any possiblity of increasing concurrency in that path, which is okay
now but I find the combination of such long term commitment and the
non-obviousness (it's not apparent from looking at memcg code why it
wouldn't deadlock) very unappealing.  Besides, the only reason
offline() is currently called under cgroup_mutex is history.  We can
move it out of cgroup_mutex right now.

But even with offline being called outside cgroup_mutex, IIRC, the
described problem would still be able to deadlock as long as the tree
depth is deeper than max concurrency level of the destruction
workqueue.  Sure, we can give it large enough number but it's
generally nasty.

One thing I don't get is why memcg has such reverse dependency at all.
Why does the parent wait for its descendants to do something during
offline?  Shouldn't it be able to just bail and let whatever
descendant which is stil busy propagate things upwards?  That's a
usual pattern we use to tree shutdowns anyway.  Would that be nasty to
implement in memcg?

Thanks.

-- 
tejun
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ