Message-ID: <k2nc5b2c05b1004201456j846b15a0oaa261d16740b5e33@mail.gmail.com>
Date: Tue, 20 Apr 2010 23:56:45 +0200
From: Primiano Tucci <p.tucci@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, tglx <tglx@...utronix.de>,
rostedt <rostedt@...dmis.org>
Subject: Re: Considerations on sched APIs under RT patch
Hi Peter,
thank you for your reply.
On Tue, Apr 20, 2010 at 11:20 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Mon, 2010-04-19 at 22:48 +0200, Primiano Tucci wrote:
>
>> Yesterday I found some strange behavior in the scheduler APIs when
>> using the RT patch, in particular pthread_setaffinity_np (which is
>> built on sched_setaffinity).
>
>> I think the main problem is that sched_setaffinity makes use of an
>> rwlock, but rwlocks are preemptible with the RT patch.
>
> It does? where?
>
> sys_sched_setaffinity()
> sched_setaffinity()
> set_cpus_allowed_ptr()
I see:

long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
{
        cpumask_var_t cpus_allowed, new_mask;
        struct task_struct *p;
        int retval;

        get_online_cpus();
-->     read_lock(&tasklist_lock);
My question is: suppose tasklist_lock is currently held by a writer.
What happens to the calling thread? It cannot take the lock, so it
blocks and the scheduler picks the next ready task (which in my
scenario has a lower priority).
In my view this is not a priority-inversion problem. The problem is
that sched_setaffinity is unexpectedly "suspensive" and hands the CPU
to the lower-priority thread.
Thank you for your support,
Primiano
>
> set_cpus_allowed_ptr() is the one that does the real work, and that
> takes the rq->lock and plays games with the migration thread, none of
> which should be able to cause any form of priority inversion.
>
>> So it could happen that a high-priority process/thread that makes use
>> of the sched_setaffinity facility could be unwillingly preempted when
>> controlling other (even low-priority) processes/threads.
>
> Well, suppose there was a rwlock_t, then for PREEMPT_RT=y that would be
> mapped to an rt_mutex, which is PI aware.
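
Understood. For reference, the userspace analogue of a PI-aware lock
is a mutex using the priority-inheritance protocol; a minimal sketch
(purely illustrative, not what the kernel does internally):

#include <pthread.h>

/* While a low-priority holder blocks a higher-priority waiter, PI
 * boosts the holder to the waiter's priority, bounding the inversion. */
static pthread_mutex_t lock;

static void pi_lock_init(void)
{
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&lock, &attr);
        pthread_mutexattr_destroy(&attr);
}

As I understand it, PI does not compose with multiple concurrent
readers, which is why -rt reduces rwlock_t to a single-owner rt_mutex
in the first place.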
>
>> I think sched_setaffinity should make use of raw spinlocks, or should
>> in any case be guaranteed not to be preempted (maybe a
>> preempt_disable?); otherwise it can lead to unwanted situations in a
>> real-time OS, such as the one described below.
>
> It does, rq->lock is a non preemptible lock, and the migration thread
> runs at a priority higher than FIFO-99.
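
Right. Spelled out (illustrative declarations only; my understanding
of the -rt lock mapping around 2.6.33-rt):

#include <linux/spinlock.h>

/*
 * With PREEMPT_RT=y:
 *   spinlock_t, rwlock_t -> sleeping rt_mutex-based locks, PI-aware
 *   raw_spinlock_t       -> a real spinning lock, never preempted
 */
static DEFINE_RAW_SPINLOCK(example_raw_lock);   /* like rq->lock */
static DEFINE_SPINLOCK(example_lock);           /* preemptible on -rt */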
>
>> The issue can easily be reproduced starting from this scenario:
>>
>> I have four real-time threads (SCHED_FIFO) distributed as follows:
>>
>> T0 : CPU 0, Priority 2 (HIGH)
>> T1 : CPU 1, Priority 2 (HIGH)
>> T3 : CPU 0, Priority 1 (LOW)
>> T4 : CPU 1, Priority 1 (LOW)
>>
>> So T0 and T1 are actually the "big bosses" on CPUs #0 and #1; T3 and
>> T4, instead, never execute (assume each thread is a simple busy wait
>> that never sleeps or yields). Now, at a certain point, from T0's
>> code, I want to migrate T4 from CPU #1 to #0, keeping its low
>> priority.
>> I therefore perform a pthread_setaffinity_np from T0, changing T4's
>> mask from CPU #1 to #0.
>>
>> In this scenario it happens that T3 (which should never execute,
>> since T0, with higher priority, is currently running on the same
>> CPU #0) "emerges" and executes for a bit.
>> It seems that the pthread_setaffinity_np syscall is somehow
>> "suspensive" for the time needed to migrate T4, letting the
>> scheduler run T3 for that window.
>>
>> What do you think about this situation? Should sched APIs be revised?
>
> Not sure why you think the APIs should be changed. If this does
> indeed happen then there is a bug somewhere in the implementation, the
> trick will be finding it.
>
> So you run these four RT tasks on CPUs 0,1 and then control them from
> another cpu, say 2?
>
> Can you get a function trace that illustrates T3 getting scheduled,
> preferably while running the latest -rt kernel?
>
>
--
Primiano Tucci
http://www.primianotucci.com