Message-ID: <20080702013927.GA2264@sgi.com>
Date: Tue, 1 Jul 2008 20:39:27 -0500
From: Dean Nelson <dcn@....com>
To: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Robin Holt <holt@....com>,
ksummit-2008-discuss@...ts.linux-foundation.org,
Linux Kernel list <linux-kernel@...r.kernel.org>
Subject: Re: Delayed interrupt work, thread pools
On Tue, Jul 01, 2008 at 08:02:40AM -0500, Robin Holt wrote:
> Adding Dean Nelson to this discussion. I don't think he actively
> follows lkml. We do something similar to this in xpc by managing our
> own pool of threads. I know he has talked about this type thing in the
> past.
>
> Thanks,
> Robin
>
>
> On Tue, Jul 01, 2008 at 10:45:35PM +1000, Benjamin Herrenschmidt wrote:
> >
> > For the specific SPU management issue we've been thinking about, we
> > could just implement an ad-hoc mechanism locally, but it occurs to me
> > that maybe this is a more generic problem and thus some kind of
> > extension to workqueues would be a good idea here.
> >
> > Any comments ?
As Robin mentioned, XPC manages a pool of kthreads that can (for performance
reasons) be quickly awakened by an interrupt handler and that are able to
block for indefinite periods of time.
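The basic shape is roughly the following (a minimal sketch, not the
actual XPC code; pool_wq, nr_pending_msgs, and the other names are made
up for illustration). The interrupt handler itself does almost nothing;
it just notes the pending work and kicks the pool:

	#include <linux/interrupt.h>
	#include <linux/kthread.h>
	#include <linux/wait.h>

	/* shared pool state (hypothetical; reused in the sketches below) */
	static DECLARE_WAIT_QUEUE_HEAD(pool_wq);
	static atomic_t nr_pending_msgs = ATOMIC_INIT(0);
	static atomic_t nr_idle_kthreads = ATOMIC_INIT(0);
	static atomic_t nr_total_kthreads = ATOMIC_INIT(0);

	static irqreturn_t pool_interrupt(int irq, void *dev_id)
	{
		/* note the pending message, then wake an idle kthread */
		atomic_inc(&nr_pending_msgs);
		wake_up_interruptible(&pool_wq);
		return IRQ_HANDLED;
	}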
In drivers/misc/sgi-xp/xpc_main.c you'll find a rather simplistic attempt
at maintaining this pool of kthreads.
The kthreads are activated by calling xpc_activate_kthreads(), which
wakes idle kthreads and creates new ones if a sufficient number of idle
kthreads is not available.
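In outline it looks something like this (again a simplified sketch
reusing the hypothetical names above, not the real code; the cap on the
total number of kthreads is discussed below):

	#define MAX_KTHREADS	8	/* arbitrary cap for the sketch */

	static int pool_kthread_main(void *arg);	/* see next sketch */

	static void activate_kthreads(int needed)
	{
		int idle = atomic_read(&nr_idle_kthreads);

		/* first wake as many idle kthreads as we can use */
		if (idle > 0) {
			int wakeup = min(needed, idle);

			wake_up_nr(&pool_wq, wakeup);
			needed -= wakeup;
		}

		/* then create new kthreads for the rest, up to the cap */
		while (needed-- > 0 &&
		       atomic_read(&nr_total_kthreads) < MAX_KTHREADS) {
			struct task_struct *t;

			atomic_inc(&nr_total_kthreads);
			t = kthread_run(pool_kthread_main, NULL, "pool/%d",
					atomic_read(&nr_total_kthreads));
			if (IS_ERR(t))
				atomic_dec(&nr_total_kthreads);
		}
	}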
Once finished with its current 'work', a kthread waits for new work by
calling wait_event_interruptible_exclusive(). (The call is found in
xpc_kthread_waitmsgs().)
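The idle path then looks roughly like this (a sketch as before;
process_pending_msgs() is a stand-in for the real message handling and
would consume nr_pending_msgs):

	static int pool_kthread_main(void *arg)
	{
		while (!kthread_should_stop()) {
			process_pending_msgs();	/* hypothetical */

			/*
			 * Queue as an *exclusive* waiter so that the
			 * wake_up_nr() above wakes only as many kthreads
			 * as are needed rather than the whole pool.
			 */
			atomic_inc(&nr_idle_kthreads);
			wait_event_interruptible_exclusive(pool_wq,
					atomic_read(&nr_pending_msgs) > 0 ||
					kthread_should_stop());
			atomic_dec(&nr_idle_kthreads);
		}
		atomic_dec(&nr_total_kthreads);
		return 0;
	}

The exclusive wait is the point of using
wait_event_interruptible_exclusive() rather than plain
wait_event_interruptible(): a wake-up pulls only one waiter (or, with
wake_up_nr(), the requested number) off the queue instead of causing a
thundering herd.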
The number of idle kthreads is limited, as is the total number of
kthreads allowed to exist concurrently.
It's certainly not optimal in the way it maintains the number of kthreads
in the pool over time, but I've not had the time to spare to make it better.
I'd love it if a general mechanism were provided so that XPC could get out
of maintaining its own pool.
Thanks,
Dean