Message-ID: <20060927161847.GB1291@us.ibm.com>
Date: Wed, 27 Sep 2006 09:18:47 -0700
From: "Paul E. McKenney" <paulmck@...ibm.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: "Eric W. Biederman" <ebiederm@...ssion.com>,
Bill Huey <billh@...ppy.monkey.org>,
linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
John Stultz <johnstul@...ibm.com>,
Dipankar Sarma <dipankar@...ibm.com>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [PATCH] move put_task_struct() reaping into a thread [Re: 2.6.18-rt1]
On Wed, Sep 27, 2006 at 04:06:59PM +0200, Ingo Molnar wrote:
>
> * Eric W. Biederman <ebiederm@...ssion.com> wrote:
>
> > I'm still wondering if we can move put_task_struct a little lower in
> > the logic in the places where it is called, so it isn't called under a
> > lock, or with preemption disabled. The only downside I see is that it
> > might convolute the logic into unreadability.
>
> well it's all a function of the task reaping logic: right now we in
> essence complete the reaping from the scheduler, via prev_state ==
> TASK_DEAD. We cannot do it sooner because the task is still in use. I
> had one other implementation upstream some time ago, which was a
> single-slot cache for reaped tasks - but that uglified other codepaths
> because _something_ has to notice that the task has been unscheduled.
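For context, the path in question is essentially the TASK_DEAD branch
of finish_task_switch() -- a simplified sketch from memory, not the
verbatim 2.6.18-rt1 source:

	static inline void finish_task_switch(struct rq *rq, struct task_struct *prev)
	{
		unsigned long prev_state = prev->state;

		finish_arch_switch(prev);
		finish_lock_switch(rq, prev);
		if (unlikely(prev_state == TASK_DEAD)) {
			/*
			 * The scheduler is the last user of the task:
			 * drop the final reference right here, with
			 * preemption disabled.
			 */
			put_task_struct(prev);
		}
	}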
I believe that we are way too far into the task-teardown process for
something like synchronize_rcu() to be feasible (not enough of the
task left to be able to sleep!), but thought I should bring up the
RCU possibility on the off-chance that it prompts someone to come up
with a better approach.
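If sleeping is out, the non-sleeping variant would be call_rcu().  A
rough sketch, purely illustrative -- it assumes a struct rcu_head were
added to task_struct for this purpose, which is not something
2.6.18-rt1 has:

	/* Illustrative: assumes task_struct gains "struct rcu_head rcu". */
	static void delayed_put_task_struct(struct rcu_head *rhp)
	{
		struct task_struct *tsk = container_of(rhp, struct task_struct, rcu);

		put_task_struct(tsk);
	}

	/* At the call site, instead of putting the task directly: */
	call_rcu(&prev->rcu, delayed_put_task_struct);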
Another possible approach would be workqueues.  The disadvantages here
are (1) higher overhead, and (2) workqueues can be delayed for a -long-
time in a realtime environment, which increases vulnerability to
memory exhaustion.
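Roughly something like the following -- again illustrative only, it
assumes a work_struct embedded in task_struct and uses the
three-argument INIT_WORK() of that era:

	/* Illustrative: assumes task_struct gains "struct work_struct put_work". */
	static void put_task_work(void *data)
	{
		struct task_struct *tsk = data;

		put_task_struct(tsk);
	}

	/* At the call site, instead of put_task_struct(prev): */
	INIT_WORK(&prev->put_work, put_task_work, prev);
	schedule_work(&prev->put_work);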
Again, hoping this provokes some better ideas...
Thanx, Paul