Message-Id: <1247499843.8107.548.camel@Palantir>
Date: Mon, 13 Jul 2009 17:44:03 +0200
From: Raistlin <raistlin@...ux.it>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Douglas Niehaus <niehaus@...c.ku.edu>,
Henrik Austad <henrik@...tad.us>,
LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
Bill Huey <billh@...ppy.monkey.org>,
Linux RT <linux-rt-users@...r.kernel.org>,
Fabio Checconi <fabio@...dalf.sssup.it>,
"James H. Anderson" <anderson@...unc.edu>,
Thomas Gleixner <tglx@...utronix.de>,
Ted Baker <baker@...fsu.edu>,
Dhaval Giani <dhaval.giani@...il.com>,
Noah Watkins <jayhawk@....ucsc.edu>,
KUSP Google Group <kusp@...glegroups.com>,
Tommaso Cucinotta <cucinotta@...up.it>,
Giuseppe Lipari <lipari@...is.sssup.it>
Subject: Re: RFC for a new Scheduling policy/class in the Linux-kernel
On Sun, 2009-07-12 at 17:31 +0200, Peter Zijlstra wrote:
> But the thing that concerns me most, there seem to be a few O(n)
> consequences. Suppose that for each resource (or lock) R_i, there is a
> block graph G_i, which consists of n nodes and would be m deep.
>
> [...]
>
> However PEP on SMP needs to ensure all n tasks in G_i are on the same
> cpu, because otherwise we can end up wanting to execute the resource
> owner on multiple cpus at the same time, which is bad.
>
That is exactly the point from which we are starting to evaluate the
various possibilities of extending the idea to SMP.
What we would like to achieve is some set of rules that, extending the
UP ones, yield a situation which is both:
- analyzable from the real-time theorist's point of view... Which is
(un?)fortunately what we are :-)
- implementable... Which, (un!)fortunately, is not always obvious
  here :-)
I hope you don't mind if I share some thoughts with you about the
solution I was wondering about... In case you do, just ignore and excuse
me.
Very basically: from the analysis point of view, one easy and effective
solution would be to have the blocked-running tasks --i.e., the tasks
blocked on some lock that have been left on the rq to proxy-execute the
lock owner-- busy-wait while the lock owner is running. This allows
retaining many of the nice properties BWI already has, as far as
analyzability is concerned.
On the other hand, from the practical and efficiency point of view, it
would not be that difficult to block these otherwise-spinning tasks, in
order to avoid wasting too much CPU time... The only important thing is
to account the budget to the correct server/group (which probably must
be the otherwise-spinning task's own), or the analysis is gone
again! :-O
Also, this will probably imply some amount of block/wakeup overhead,
which could have some impact especially with involved -- maybe rare,
but possible -- locking schemes... Something we would like to evaluate
thoroughly...
> This can of course be amortized, but you end up having to migrate the
> task (or an avatar thereof) to the owner's cpu (if you'd want to migrate
> the owner to the blocker's cpu, then you're quickly into trouble when
> there's multiple blockers), but any way around this ends up being O(n).
>
I agree... Computational complexity is also an issue, and something we
have to validate against.
> Also, when the owner gets blocked on something that doesn't have an
> owner (io completion, or a traditional semaphore), you have to take all
> n tasks from the runqueue (and back again when they do become runnable).
>
> PIP doesn't suffer this, but does suffer the pain from having to
> reimplement the full schedule function on the waitqueues, which when you
> have hierarchical scheduling means you have to replicate the full
> hierarchy per waitqueue.
>
Moreover, at least from my point of view, if you have server/group
based scheduling, and in general some kind of budgeting or bandwidth
enforcement mechanism in place, PIP is far from being a solution...
> Furthermore we cannot assume locked sections are short, and we must
> indeed assume that its any resource in the kernel associated with any
> service which can be used by any thread, worse, it can be any odd
> userspace resource/thread too, since we expose the block graph to
> userspace processes through PI-futexes.
>
Agree, 100%. :-)
Again, sorry for bothering those of you who are not interested... If
instead you are, any comment is more than welcome.
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
----------------------------------------------------------------------
Dario Faggioli, ReTiS Lab, Scuola Superiore Sant'Anna, Pisa (Italy)
http://blog.linux.it/raistlin / raistlin@...ga.net /
dario.faggioli@...ber.org