Message-ID: <50F683D9.1080908@linux.vnet.ibm.com>
Date: Wed, 16 Jan 2013 16:11:29 +0530
From: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To: Ivo Sieben <meltedpianoman@...il.com>
CC: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
Andi Kleen <andi@...stfloor.org>,
Oleg Nesterov <oleg@...hat.com>, Jiri Slaby <jslaby@...e.cz>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
linux-serial@...r.kernel.org, Alan Cox <alan@...ux.intel.com>,
Greg KH <gregkh@...uxfoundation.org>
Subject: Re: [PATCH] tty: Only wakeup the line discipline idle queue when
queue is active
Hi Ivo,
On 01/16/2013 02:46 PM, Ivo Sieben wrote:
> Hi Preeti,
>
> 2013/1/16 Preeti U Murthy <preeti@...ux.vnet.ibm.com>:
>> Hi Ivo,
>> Can you explain how this problem could create scheduler overhead?
>> I am a little confused, because as far as I know, the scheduler does
>> not come into the picture on the wakeup path, right? select_task_rq()
>> in try_to_wake_up() is where the scheduler comes in, and this is after
>> the task wakes up.
>>
>
> Every time the line discipline is dereferenced, the wakeup function is
> called. The wakeup() function contains a critical section protected by
> spinlocks. On a PREEMPT_RT system, a "normal" spinlock behaves just
> like a mutex: scheduling is not disabled and it is still possible for
> a process at a higher RT priority to be scheduled in. When a new,
> higher priority process is scheduled in just as put_ldisc() is
> in the critical section of the wakeup function, the higher priority
> process (that uses the same TTY instance) will finally also
> dereference the line discipline and try to wakeup the same waitqueue.
> This causes the high priority process to block on the same spinlock.
> Priority inheritance will solve this blocked situation by a context
> switch to the lower priority process, run until that process leaves
> the critical section, and a context switch back to the higher priority
> process. This is unnecessary since the waitqueue was empty after all
> (during normal operation the waitqueue is empty most of the time).
> This unnecessary context switch from/to the high priority process is
> what I mean by "scheduler overhead" (maybe not a good name for it,
> sorry for the confusion).
>
> Does this make sense to you?
Yes. Thank you very much for the explanation :) But I don't see how the
context switching goes away with your patch. With your patch, when the
higher priority thread comes in while the lower priority thread is
running in the critical section, it will see the wait queue empty and
continue its execution without needing to enter the critical section at
all. So it will still preempt the lower priority thread, because it is
no longer waiting on a lock. There is a context switch here, right?
I don't see any scheduling problem arising from this, but I do think
your patch is essential.
The entire logic of

wakelist_active()
wake_up()

could be integrated into wake_up() itself. I don't understand why we
need a separate function to check whether the wait queue is empty. But
as Oleg pointed out, we must first identify the places where this
optimization applies.
Regards
Preeti U Murthy