Message-ID: <20121031212359.GC5286@dhcp22.suse.cz>
Date: Wed, 31 Oct 2012 22:23:59 +0100
From: Michal Hocko <mhocko@...e.cz>
To: Tejun Heo <tj@...nel.org>
Cc: lizefan@...wei.com, hannes@...xchg.org, bsingharora@...il.com,
kamezawa.hiroyu@...fujitsu.com,
containers@...ts.linux-foundation.org, cgroups@...r.kernel.org,
linux-kernel@...r.kernel.org, Glauber Costa <glommer@...allels.com>
Subject: Re: [PATCH 4/8] cgroup: deactivate CSS's and mark cgroup dead before
invoking ->pre_destroy()
On Wed 31-10-12 12:44:06, Tejun Heo wrote:
[...]
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index f22e3cd..66204a6 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
[...]
> @@ -4122,13 +4079,30 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
> }
> prepare_to_wait(&cgroup_rmdir_waitq, &wait, TASK_INTERRUPTIBLE);
>
> - /* block new css_tryget() by deactivating refcnt */
> + /*
> + * Block new css_tryget() by deactivating refcnt and mark @cgrp
> + * removed. This makes future css_tryget() and child creation
> + * attempts fail thus maintaining the removal conditions verified
> + * above.
> + */
> for_each_subsys(cgrp->root, ss) {
> struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
>
> WARN_ON(atomic_read(&css->refcnt) < 0);
> atomic_add(CSS_DEACT_BIAS, &css->refcnt);
> }
> + set_bit(CGRP_REMOVED, &cgrp->flags);
> +
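The bias trick in the hunk above deserves a note: CSS_DEACT_BIAS is a large
negative constant, so once it is added the counter stays negative no matter
how many readers raced in, and css_tryget() can simply refuse when it sees a
negative value. A minimal userspace model of that shape (illustration only,
not the kernel implementation; the names just mirror the patch):

#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

#define CSS_DEACT_BIAS  INT_MIN

static atomic_int refcnt = 1;   /* base reference */

static bool css_tryget(void)
{
        int v = atomic_load(&refcnt);

        /* Succeed only while the counter is still positive. */
        while (v > 0) {
                if (atomic_compare_exchange_weak(&refcnt, &v, v + 1))
                        return true;
        }
        return false;   /* deactivated (or gone): refuse new references */
}

static void css_deactivate(void)
{
        /* After this the counter is negative and every tryget fails. */
        atomic_fetch_add(&refcnt, CSS_DEACT_BIAS);
}

The set_bit(CGRP_REMOVED) then closes the other door, new child creation, so
the removal conditions verified earlier cannot be invalidated.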
> + /*
> +	 * Tell subsystems to initiate destruction. pre_destroy() should be
> + * called with cgroup_mutex unlocked. See 3fa59dfbc3 ("cgroup: fix
> + * potential deadlock in pre_destroy") for details.
> + */
> + mutex_unlock(&cgroup_mutex);
> + for_each_subsys(cgrp->root, ss)
> + if (ss->pre_destroy)
> + WARN_ON_ONCE(ss->pre_destroy(cgrp));
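For context, the pattern the comment refers to is: make the group
unreachable while cgroup_mutex is held, and only then drop the mutex before
calling back into the subsystems, since ->pre_destroy() may block or need
the mutex itself. A standalone sketch of that shape, with placeholder names
(big_lock, destroy_cb), not the actual cgroup code:

#include <pthread.h>
#include <stdbool.h>

struct obj {
        bool removed;
        int (*destroy_cb)(struct obj *);  /* may block or take big_lock */
};

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

static int remove_obj(struct obj *o)
{
        pthread_mutex_lock(&big_lock);
        o->removed = true;              /* new lookups now fail */
        pthread_mutex_unlock(&big_lock);

        /* Callback runs unlocked: it cannot deadlock against big_lock. */
        if (o->destroy_cb)
                return o->destroy_cb(o);
        return 0;
}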
Do you think that a BUG_ON instead of the WARN_ON_ONCE around
->pre_destroy() would be too harsh?
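For reference, the trade-off as I read it: WARN_ON_ONCE() complains once and
lets the rmdir path continue, while BUG_ON() would kill the task in the
middle of the removal (or take the whole box down with panic_on_oops). A
rough userspace analogue of just the control flow (the real macros also dump
stack and taint the kernel):

#include <stdio.h>
#include <stdlib.h>

#define WARN_ON_ONCE(cond) ({                                   \
        static int warned;                                      \
        int c = !!(cond);                                       \
        if (c && !warned) {                                     \
                warned = 1;                                     \
                fprintf(stderr, "WARNING at %s:%d\n",           \
                        __FILE__, __LINE__);                    \
        }                                                       \
        c;      /* caller can still act on the condition */     \
})

#define BUG_ON(cond)                                            \
        do {                                                    \
                if (cond)                                       \
                        abort();  /* task dies on the spot */   \
        } while (0)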
--
Michal Hocko
SUSE Labs