Date:	Wed, 12 Feb 2014 15:06:26 -0800 (PST)
From:	Hugh Dickins <hughd@...gle.com>
To:	Tejun Heo <tj@...nel.org>, Michal Hocko <mhocko@...e.cz>
cc:	Johannes Weiner <hannes@...xchg.org>,
	Filipe Brandenburger <filbranden@...gle.com>,
	Li Zefan <lizefan@...wei.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Greg Thelen <gthelen@...gle.com>,
	Michel Lespinasse <walken@...gle.com>,
	Markus Blank-Burian <burian@...nster.de>,
	Shawn Bohrer <shawn.bohrer@...il.com>, cgroups@...r.kernel.org,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: [PATCH 2/2] cgroup: bring back kill_cnt to order css destruction

Sometimes the cleanup after memcg hierarchy testing gets stuck in
mem_cgroup_reparent_charges(), unable to bring non-kmem usage down to 0.

There may turn out to be several causes, but a major cause is this: the
work item to offline the parent can get run before the work item to
offline the child; the parent's mem_cgroup_reparent_charges() circles
around waiting for the child's pages to be reparented to its lrus, but
it is holding cgroup_mutex, which prevents the child from reaching its
own mem_cgroup_reparent_charges().

Further testing showed that an ordered workqueue for cgroup_destroy_wq
is not always good enough: percpu_ref_kill_and_confirm's intervening
call_rcu_sched stage can still reorder the work items before they reach
the workqueue.

Instead bring back v3.11's css kill_cnt, repurposing it to make sure
that offline_css() is not called for the parent before it has been
called for all of its children.

Fixes: e5fca243abae ("cgroup: use a dedicated workqueue for cgroup destruction")
Signed-off-by: Hugh Dickins <hughd@...gle.com>
Reviewed-by: Filipe Brandenburger <filbranden@...gle.com>
Cc: stable@...r.kernel.org # v3.10+ (but will need extra care)
---
This is an alternative to Filipe's 1/2: there's no need for both,
but each has its merits.  I prefer Filipe's, which is much easier to
understand: this one made more sense in v3.11, when it was just a matter
of extending the use of css_kill_cnt; but it might be preferred if
offlining children before their parent is thought to be a good idea
generally.

 include/linux/cgroup.h |    3 +++
 kernel/cgroup.c        |   21 +++++++++++++++++++++
 2 files changed, 24 insertions(+)

--- 3.14-rc2/include/linux/cgroup.h	2014-02-02 18:49:07.033302094 -0800
+++ linux/include/linux/cgroup.h	2014-02-11 15:59:22.720393186 -0800
@@ -79,6 +79,9 @@ struct cgroup_subsys_state {
 
 	unsigned long flags;
 
+	/* ensure children are offlined before parent */
+	atomic_t kill_cnt;
+
 	/* percpu_ref killing and RCU release */
 	struct rcu_head rcu_head;
 	struct work_struct destroy_work;
--- 3.14-rc2/kernel/cgroup.c	2014-02-02 18:49:07.737302111 -0800
+++ linux/kernel/cgroup.c	2014-02-11 15:57:56.000391125 -0800
@@ -175,6 +175,7 @@ static int need_forkexit_callback __read
 
 static struct cftype cgroup_base_files[];
 
+static void css_killed_ref_fn(struct percpu_ref *ref);
 static void cgroup_destroy_css_killed(struct cgroup *cgrp);
 static int cgroup_destroy_locked(struct cgroup *cgrp);
 static int cgroup_addrm_files(struct cgroup *cgrp, struct cftype cfts[],
@@ -4043,6 +4044,7 @@ static void init_css(struct cgroup_subsy
 	css->cgroup = cgrp;
 	css->ss = ss;
 	css->flags = 0;
+	atomic_set(&css->kill_cnt, 2);	/* confirm-kill + kill_css() refs */
 
 	if (cgrp->parent)
 		css->parent = cgroup_css(cgrp->parent, ss);
@@ -4292,6 +4294,7 @@ static void css_killed_work_fn(struct wo
 {
 	struct cgroup_subsys_state *css =
 		container_of(work, struct cgroup_subsys_state, destroy_work);
+	struct cgroup_subsys_state *parent = css->parent;
 	struct cgroup *cgrp = css->cgroup;
 
 	mutex_lock(&cgroup_mutex);
@@ -4320,6 +4323,12 @@ static void css_killed_work_fn(struct wo
 	 * destruction happens only after all css's are released.
 	 */
 	css_put(css);
+
+	/*
+	 * Put the parent's kill_cnt reference from kill_css(), and
+	 * schedule its ->css_offline() if all children are now offline.
+	 */
+	css_killed_ref_fn(&parent->refcnt);
 }
 
 /* css kill confirmation processing requires process context, bounce */
@@ -4328,6 +4337,9 @@ static void css_killed_ref_fn(struct per
 	struct cgroup_subsys_state *css =
 		container_of(ref, struct cgroup_subsys_state, refcnt);
 
+	if (!atomic_dec_and_test(&css->kill_cnt))
+		return;
+
 	INIT_WORK(&css->destroy_work, css_killed_work_fn);
 	queue_work(cgroup_destroy_wq, &css->destroy_work);
 }
@@ -4362,6 +4374,15 @@ static void kill_css(struct cgroup_subsy
 	 * css is confirmed to be seen as killed on all CPUs.
 	 */
 	percpu_ref_kill_and_confirm(&css->refcnt, css_killed_ref_fn);
+
+	/*
+	 * Make sure that ->css_offline() will not be called for parent
+	 * before it has been called for all children: this ordering
+	 * requirement is important for memcg, where parent's offline
+	 * might wait for a child's, leading to deadlock.
+	 */
+	atomic_inc(&css->parent->kill_cnt);
+	css_killed_ref_fn(&css->refcnt);
 }
 
 /**
--