Message-ID: <2f86c2480908041214r1f23c1b7q9a25b04e26c92a1a@mail.gmail.com>
Date: Tue, 4 Aug 2009 15:14:59 -0400
From: Benjamin Blum <bblum@...gle.com>
To: Paul Menage <menage@...gle.com>
Cc: "Serge E. Hallyn" <serue@...ibm.com>,
containers@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org
Subject: Re: [PATCH 6/6] Makes procs file writable to move all threads by tgid at once
On Tue, Aug 4, 2009 at 2:48 PM, Paul Menage <menage@...gle.com> wrote:
> On Mon, Aug 3, 2009 at 12:45 PM, Serge E. Hallyn <serue@...ibm.com> wrote:
>>
>> This is probably a stupid idea, but... what about having zero
>> overhead at clone(), and instead, at cgroup_task_migrate(),
>> dequeue_task()ing all of the affected threads for the duration of
>> the migrate?
>>
>
> Or a simpler alternative - rather than taking the thread group
> leader's rwsem in cgroup_fork(), always take current's rwsem. Then
> you're always locking a (probably?) local rwsem and minimizing the
> overhead. So not quite zero overhead in the fork path, but I'd be
> surprised if it was measurable. In cgroup_attach_proc() you then have
> to take the rwsem of every thread in the process. Kind of the
> equivalent of a per-threadgroup big-reader lock.
Hmm, the tasklist_lock section in fork() is entirely inside the
read-lock. Presumably, then, iterating the threadgroup list to take
all the rwsems is safe from a race in which one thread escapes?