Message-ID: <20090313195712.GA18439@us.ibm.com>
Date: Fri, 13 Mar 2009 14:57:13 -0500
From: "Serge E. Hallyn" <serue@...ibm.com>
To: Li Zefan <lizf@...fujitsu.com>
Cc: Linux Containers <containers@...ts.linux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Paul Menage <menage@...gle.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] devcgroup: avoid using cgroup_lock
Quoting Serge E. Hallyn (serue@...ibm.com):
> Quoting Li Zefan (lizf@...fujitsu.com):
> > >> @@ -426,11 +431,11 @@ static int devcgroup_access_write(struct cgroup *cgrp, struct cftype *cft,
> > >> const char *buffer)
> > >> {
> > >> int retval;
> > >> - if (!cgroup_lock_live_group(cgrp))
> > >
> > > Does it matter that we no longer check for cgroup_is_removed()?
> > >
> >
> > No. It means that in the rare case where the write handler is called
> > after the cgroup is already dead, we still do the update work instead
> > of returning -ENODEV.
> >
> > This is OK: at that point, accessing the cgroup and devcgroup is still
> > valid, but the update has no effect, since there are no tasks left in
> > the cgroup and it will be destroyed soon.
>
> Ok, just wanted to make sure the devcgroup couldn't be partially torn
> down, risking NULL or freed-memory derefs...
Ok, so the cgroup's files will be deleted first, and then on the directory
removal the cgroup's data (each whitelist entry) is deleted. So we can
rely on that ordering (enforced by cgroup_clear_directory) to ensure that
nothing inside a file write can happen while the destroy handler runs,
right? (That's why I was worried about not using the cgroup_lock: we need
some way of synchronizing those. But I guess we're fine.)
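To make the dropped check concrete: cgroup_lock_live_group() takes the
global lock but fails if the group has already been removed, so the old
write handler returned -ENODEV for a dead cgroup. A minimal user-space
sketch of that pattern (the names group_lock_live, group_write and the
struct here are hypothetical pthread-based stand-ins, not the kernel API):

```c
#include <pthread.h>
#include <errno.h>

/* Illustrative analogue of a cgroup with a "removed" state. */
struct group {
	pthread_mutex_t lock;
	int removed;		/* set by the removal path under lock */
};

/* Take the lock, but refuse if the group is already dead:
 * the caller then bails out with -ENODEV, as the old
 * devcgroup_access_write() did via cgroup_lock_live_group(). */
static int group_lock_live(struct group *g)
{
	pthread_mutex_lock(&g->lock);
	if (g->removed) {
		pthread_mutex_unlock(&g->lock);
		return 0;	/* dead: lock not held on return */
	}
	return 1;		/* alive: lock held */
}

static int group_write(struct group *g)
{
	if (!group_lock_live(g))
		return -ENODEV;
	/* ... update the whitelist here ... */
	pthread_mutex_unlock(&g->lock);
	return 0;
}
```

The patch under discussion drops the liveness check and relies instead on
the file-deletion ordering described above to keep writes and teardown
from overlapping.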
> BTW is that against linux-next? (didn't seem to apply cleanly against
> my 2.6.29-rc9) I guess I'd like to do a little test before acking,
> though it looks ok based on your answer.
Acked-by: Serge Hallyn <serue@...ibm.com>
-serge
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/