Message-ID: <20111012180521.GA20715@ghc17.ghc.andrew.cmu.edu>
Date: Wed, 12 Oct 2011 14:05:21 -0400
From: Ben Blum <bblum@...rew.cmu.edu>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Tejun Heo <htejun@...il.com>, rjw@...k.pl, paul@...lmenage.org,
lizf@...fujitsu.com, linux-pm@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org,
containers@...ts.linux-foundation.org, fweisbec@...il.com,
matthltc@...ibm.com, akpm@...ux-foundation.org,
Paul Menage <menage@...gle.com>,
Ben Blum <bblum@...rew.cmu.edu>
Subject: Re: [PATCH 3/4] threadgroup: extend threadgroup_lock() to cover
 exit and exec

On Wed, Oct 12, 2011 at 07:51:04PM +0200, Oleg Nesterov wrote:
> Hi,
>
> On 10/10, Tejun Heo wrote:
> >
> > Hope you can still remember some
> > of this one. :)
>
> I am not sure ;)
>
> > On Sun, Sep 18, 2011 at 07:37:23PM +0200, Oleg Nesterov wrote:
> > > > With this change, threadgroup_lock() guarantees that the target
> > > > threadgroup will remain stable - no new task will be added, no new
> > > > PF_EXITING will be set and exec won't happen.
> > >
> > > To me, this is the only "contradictory" change,
> >
> > What do you mean "contradictory"? Can you please elaborate?
>
> Because, iirc, with this patch do_exit() does (almost) everything
> under rw_sem. OK, down_read() should be cheap, but still.
>
> See also below.
>
> > > > + /*
> > > > + * Release threadgroup and make sure we are holding no locks.
> > > > + */
> > > > + threadgroup_change_done(tsk);
> > >
> > > I am wondering, can't we narrow the scope of threadgroup_change_begin/done
> > > in do_exit() path?
> > >
> > > The code after 4/4 still has to check PF_EXITING, this is correct. And yes,
> > > with this patch PF_EXITING becomes stable under ->group_rwsem. But, it seems,
> > > we do not really need this?
> > >
> > > I mean, can't we change cgroup_exit() to do threadgroup_change_begin/done
> > > instead? We do not really care about PF_EXITING, we only need to ensure that
> > > we can't race with cgroup_exit(), right?
> >
> > If we confine our usage to cgroup, excluding just against
> > cgroup_exit() might work although this is still a bit nasty. ie. some
> > callbacks might not expect half torn-down tasks in methods other than
> > the exit callback.
>
> Oh, sorry, I don't understand... I already forgot the details.
>
> > Also, it makes the mechanism unnecessarily cgroup-specific without
> > gaining much if anything.
>
> Yes! And _personally_ I think it should be cgroup-specific, that is
> why I dislike the very fact do_exit() uses it directly. To me it would
> be cleaner to shift it into cgroup hooks. Yes, sure, this is subjective.
In the fork path, threadgroup_fork_read_...() is also called directly,
not through cgroups. Would that change too?
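
For reference, the current arrangement in copy_process() is roughly the
following (heavily simplified sketch from memory; error paths and the
exact call sites elided):

	/* kernel/fork.c: copy_process(), simplified sketch */
	threadgroup_fork_read_lock(current);	/* down_read() on the per-threadgroup sem */
	cgroup_fork(p);				/* point the child at the parent's css_set */

	/* ... copy mm/fs/files/sighand, link p into ->thread_group ... */

	cgroup_post_fork(p);			/* finish cgroup setup for the child */
	threadgroup_fork_read_unlock(current);

so the read-side calls live in fork.c itself, not behind a cgroup hook.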
>
> In fact I still hope we can kill this sem altogether, but so far I have
> no idea how we can do this. We do need the new per-process lock to
> protect (in particular) ->thread_group. It is quite possible that it
> should be rw_semaphore. But in this case we down_write(), not _read
> in exit/fork paths, and its scope should be small.
I'm confused - taking a big rwsem for writing in the fork/exit paths?
The point here is that even though fork/exit modify ->thread_group, they
are logical "readers" while cgroups is the "writer": cgroups needs a
stable view that excludes all fork/exit activity, whereas fork/exit can
run concurrently with one another.

For clarity: in the fork path the lock is not meant to protect
->thread_group; it is meant to protect the window between cgroup_fork()
and cgroup_post_fork().
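
In other words, with the ->group_rwsem naming you used above, the roles
this series intends are roughly (sketch only; the unlock name is my
assumption):

	/* fork/exit side: logical readers, may run concurrently */
	threadgroup_change_begin(tsk);	/* down_read(&tsk->signal->group_rwsem) */
	/* ... fork or exit work that touches ->thread_group / cgroup state ... */
	threadgroup_change_done(tsk);	/* up_read() */

	/* cgroup attach side: the lone writer, needs the whole group stable */
	threadgroup_lock(leader);	/* down_write(&leader->signal->group_rwsem) */
	/* ... walk ->thread_group and migrate every thread in one go ... */
	threadgroup_unlock(leader);	/* up_write() */

So down_write() only ever happens in the (rare) attach path, never in
fork/exit.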
>
> I do not think the current lock should have more users. Of course I
> can be wrong. And what exactly does it protect? In copy_process() it
> covers almost everything, but it really only connects to the cgroup fork hooks.
>
> Just my opinion, I am not going to insist.
>
> Oleg.
>
>