Message-ID: <1271804453.10448.168.camel@gandalf.stny.rr.com>
Date: Tue, 20 Apr 2010 19:00:53 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: Primiano Tucci <p.tucci@...il.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, tglx <tglx@...utronix.de>
Subject: Re: Considerations on sched APIs under RT patch
On Tue, 2010-04-20 at 23:56 +0200, Primiano Tucci wrote:
> Hi Peter,
> long sched_setaffinity(pid_t pid, const struct cpumask *in_mask) {
> cpumask_var_t cpus_allowed, new_mask;
> struct task_struct *p;
> int retval;
>
> get_online_cpus();
> --> read_lock(&tasklist_lock);
>
>
> My question is: suppose that tasklist_lock is held by a writer.
> What happens to the calling thread? It can't take the lock, so it
> yields to the next ready task (which in my scenario has a lower
> priority).
> In my view, this is not a Priority Inversion problem. The problem is
> that sched_setaffinity unexpectedly blocks and yields to the
> lower-priority thread.
Under the RT patch, read_locks are converted into "special" rt_mutexes.
The only thing special about them is that the owner may take the same
read lock more than once (recursively).
If a lower priority process currently holds the tasklist_lock for write,
when a high priority process tries to take it for read (or write for
that matter) it will block on the lower priority process. But that lower
priority process will acquire the priority of the higher priority
process (priority inheritance) and will run at that priority until it
releases the lock. Then it will go back to its low priority and the
higher priority process will then preempt it and acquire the lock for
read.
The above is what is expected.
-- Steve
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/