Message-ID: <20110908021141.GB22545@unix33.andrew.cmu.edu>
Date: Wed, 7 Sep 2011 22:11:42 -0400
From: Ben Blum <bblum@...rew.cmu.edu>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Ben Blum <bblum@...rew.cmu.edu>, NeilBrown <neilb@...e.de>,
paulmck@...ux.vnet.ibm.com, Paul Menage <paul@...lmenage.org>,
Li Zefan <lizf@...fujitsu.com>,
containers@...ts.linux-foundation.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Frederic Weisbecker <fweisbec@...il.com>
Subject: Re: [PATCH][BUGFIX] cgroups: more safe tasklist locking in
cgroup_attach_proc
On Fri, Sep 02, 2011 at 02:32:51PM +0200, Oleg Nesterov wrote:
> On 09/01, Ben Blum wrote:
> >
> > On Mon, Aug 15, 2011 at 08:49:57PM +0200, Oleg Nesterov wrote:
> > >
> > > But can't we avoid the global list? thread_group_leader() or not, we do
> > > not really care. We only need to ensure we can safely find all threads.
> > >
> > > How about the patch below?
> >
> > I was content with the tasklist_lock because cgroup_attach_proc is
> > already a pretty heavyweight operation, and probably pretty rare that a
> > user would want to do multiple of them at once quickly.
>
> Perhaps. But this is the global lock, no matter what. Why do you prefer
> to take it instead of per-process ->siglock?
Basically just because I wanted the simplest race-free implementation.
At that point I was just aiming to have a complete set of patches, and
didn't judge the global lock to be bad enough to be worth replacing. I
could still be convinced that siglock is better.
>
> > I asked Andrew
> > to take the simple tasklist_lock patch just now, since it does fix the
> > bug at least.
>
> I disagree.
>
> > Anyway, looking at this, hmm. I am not sure if this protects adequately?
> > In de_thread, the sighand lock is held only around the first half
> > (around zap_other_threads),
>
> We only need this lock to find all threads,
>
> > and not around the following section where
> > leadership is transferred (esp. around the list_replace calls).
> > tasklist_lock is held here, though, so it seems like the right lock to
> > hold.
>
> This doesn't matter at all, I think. The code under while_each_thread()
> doesn't need the stable ->group_leader, and it can be changed right after
> you drop tasklist. list_replace() calls do not play with ->thread_group
> list.
>
> How can tasklist make any difference?
Oops. I misread the list_replace calls as being on ->thread_group,
instead of on ->tasks as they actually are. Hm, and I see __exit_signal
takes the sighand lock around the call to __unhash_process.
OK, I believe it will work. Oh, and I guess the sighand lock also
subsumes the group_leader check that you were objecting to in the other
thread.
Sorry for the delay. Whichever way this change goes in, though, the
comment should make clear how de_thread interacts with it. Something
like:
/* If the leader exits, its links on the thread_group list become
 * invalid. One way this can happen is if a sub-thread execs: then
 * de_thread() calls release_task(leader), leader->sighand gets set
 * to NULL, and lock_task_sighand() fails. Since the threadgroup is
 * still alive in that case, cgroup_procs_write should try again
 * (finding the new leader), which is what the EAGAIN here indicates.
 * This is "double-double-toil-and-trouble-check locking". */
(I rather like the phrase "double-double-..." to describe the overall
approach, and would prefer if it stayed in the comment.)
>
> > > With or without this/your patch this leader can die right after we
> > > drop the lock. ss->can_attach(leader) and ss->attach(leader) look
> > > suspicious. If a sub-thread execs, this task_struct has nothing to
> > > do with the threadgroup.
> >
> > hmm. I thought I had this case covered, but it's been so long since I
> > actually wrote the code that if I did I can't remember how.
>
> This all doesn't look right... Hopefully addressed by Tejun patches.
>
Will look into those separately.
Thanks,
Ben
>
> Oleg.
>
>