Message-ID: <20071009211738.GC23388@monkey.ibm.com>
Date: Tue, 9 Oct 2007 14:17:38 -0700
From: mike kravetz <kravetz@...ibm.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Gregory Haskins <ghaskins@...ell.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...e.hu>,
linux-rt-users <linux-rt-users@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>, pmorreale@...ell.com,
sdietrich@...ell.com
Subject: Re: [RFC PATCH RT] push waiting rt tasks to cpus with lower prios.
On Tue, Oct 09, 2007 at 04:50:47PM -0400, Steven Rostedt wrote:
> > I did something like this a while ago for another scheduling project.
> > A couple 'possible' optimizations to think about are:
> > 1) Only scan the remote runqueues once and keep a local copy of the
> > remote priorities for subsequent 'scans'. Accessing the remote
> > runqueues (CPU-specific cache lines) can be expensive.
>
> You mean to keep the copy for the next two tries?
Yes. But with #2 below, your next try is the runqueue/CPU that is the
next best candidate (after the trylock fails). The 'hope' is that there
is more than one candidate CPU to push the task to. Of course, you
always want to try to find the 'best' candidate. My thought was that
if you could find ANY cpu to take the task, that would be better than
sending the IPI everywhere. With multiple runqueues/locks there is no
way to guarantee the 'best' placement. So, a good placement may be
enough.
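
A rough (untested) sketch of the one-shot scan, with a made-up helper
name and assuming it would live in kernel/sched.c where struct rq,
cpu_rq() and for_each_online_cpu() are visible:

/*
 * Hypothetical helper: read each remote runqueue's curr->prio exactly
 * once and cache it in a local array, so later retries don't keep
 * pulling remote CPU-private cache lines.  Returns the best candidate
 * (the CPU running the numerically largest, i.e. lowest-priority,
 * task), or -1 if there is no other CPU.
 */
static int cache_remote_prios(int this_cpu, int *prio_snapshot)
{
	int cpu, best_cpu = -1, best_prio = -1;

	for_each_online_cpu(cpu) {
		if (cpu == this_cpu)
			continue;

		/* one (possibly stale) read of the remote cache line */
		prio_snapshot[cpu] = cpu_rq(cpu)->curr->prio;

		if (prio_snapshot[cpu] > best_prio) {
			best_prio = prio_snapshot[cpu];
			best_cpu = cpu;
		}
	}
	return best_cpu;
}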
> > 2) When verifying priorities, just perform spin_trylock() on the remote
> > runqueue. If you can immediately get it, great. If not, it implies
> > someone else is messing with the runqueue and there is a good chance
> > the data you pre-fetched (curr->prio) is invalid. In this case
> > it might be faster to just 'move on' to the next candidate runqueue/CPU.
> > i.e. The next highest priority that the new task can preempt.
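
A rough sketch of how #1 and #2 might fit together, again untested and
with a hypothetical helper. It reuses the snapshot from above and, on
success, returns with the remote rq->lock held so the caller can do the
actual migration before unlocking:

static int push_to_any_cpu(struct task_struct *p, int this_cpu,
			   int *prio_snapshot)
{
	int cpu, best_cpu, best_prio;

	for (;;) {
		/* pick the best remaining candidate from the snapshot */
		best_cpu = -1;
		best_prio = p->prio;	/* candidate must be preemptible by p */
		for_each_online_cpu(cpu) {
			if (cpu == this_cpu)
				continue;
			if (prio_snapshot[cpu] > best_prio) {
				best_prio = prio_snapshot[cpu];
				best_cpu = cpu;
			}
		}
		if (best_cpu < 0)
			return -1;	/* nobody left worth trying */

		if (spin_trylock(&cpu_rq(best_cpu)->lock)) {
			/* re-check: the cached prio may be stale */
			if (cpu_rq(best_cpu)->curr->prio > p->prio)
				return best_cpu;	/* rq->lock still held */
			spin_unlock(&cpu_rq(best_cpu)->lock);
		}
		/* lock contended or no longer preemptible: drop candidate */
		prio_snapshot[best_cpu] = -1;
	}
}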
--
Mike