Message-ID: <5188B272.3010207@intel.com>
Date: Tue, 07 May 2013 15:51:14 +0800
From: Alex Shi <alex.shi@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: mingo@...hat.com, tglx@...utronix.de, akpm@...ux-foundation.org,
arjan@...ux.intel.com, bp@...en8.de, pjt@...gle.com,
namhyung@...nel.org, efault@....de, morten.rasmussen@....com,
vincent.guittot@...aro.org, gregkh@...uxfoundation.org,
preeti@...ux.vnet.ibm.com, viresh.kumar@...aro.org,
linux-kernel@...r.kernel.org, len.brown@...el.com,
rafael.j.wysocki@...el.com, jkosina@...e.cz,
clark.williams@...il.com, tony.luck@...el.com,
keescook@...omium.org, mgorman@...e.de, riel@...hat.com
Subject: Re: [PATCH v4 0/6] sched: use runnable load based balance
On 05/03/2013 03:55 PM, Alex Shi wrote:
>
>> That should probably look like:
>>
>> preempt_disable();
>> raw_spin_unlock_irq();
>> preempt_enable_no_resched();
>> schedule();
>>
>> Otherwise you might find a performance regression on PREEMPT=y kernels.
>
> Yes, right!
> Thanks a lot for the reminder. The following patch will fix it.
>>
Peter, would you like to pick this patch?
>
> ---
>
> From 4c9b4b8a9b92bcbe6934637fd33c617e73dbda97 Mon Sep 17 00:00:00 2001
> From: Alex Shi <alex.shi@...el.com>
> Date: Fri, 3 May 2013 14:51:25 +0800
> Subject: [PATCH 8/8] rwsem: small optimization in rwsem_down_failed_common
>
> Peter Zijlstra suggested adding a preempt_enable_no_resched() to prevent
> an unnecessary reschedule when raw_spin_unlock_irq() re-enables preemption.
> We can also combine two raw_spin_lock_irq() sections into one and save a
> lock/unlock cycle per loop iteration. This patch does both.
>
> Thanks Peter!
>
> Signed-off-by: Alex Shi <alex.shi@...el.com>
> ---
> lib/rwsem.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/lib/rwsem.c b/lib/rwsem.c
> index ad5e0df..9aacf81 100644
> --- a/lib/rwsem.c
> +++ b/lib/rwsem.c
> @@ -212,23 +212,25 @@ rwsem_down_failed_common(struct rw_semaphore *sem,
> adjustment == -RWSEM_ACTIVE_WRITE_BIAS)
> sem = __rwsem_do_wake(sem, RWSEM_WAKE_READ_OWNED);
>
> - raw_spin_unlock_irq(&sem->wait_lock);
> -
> /* wait to be given the lock */
> for (;;) {
> - if (!waiter.task)
> + if (!waiter.task) {
> + raw_spin_unlock_irq(&sem->wait_lock);
> break;
> + }
>
> - raw_spin_lock_irq(&sem->wait_lock);
> - /* Try to get the writer sem, may steal from the head writer: */
> + /* Try to get the writer sem, may steal from the head writer */
> if (flags == RWSEM_WAITING_FOR_WRITE)
> if (try_get_writer_sem(sem, &waiter)) {
> raw_spin_unlock_irq(&sem->wait_lock);
> return sem;
> }
> + preempt_disable();
> raw_spin_unlock_irq(&sem->wait_lock);
> + preempt_enable_no_resched();
> schedule();
> set_task_state(tsk, TASK_UNINTERRUPTIBLE);
> + raw_spin_lock_irq(&sem->wait_lock);
> }
>
> tsk->state = TASK_RUNNING;
>
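For reference, a minimal sketch (not part of the patch; the function name and
lock argument are hypothetical) of the unlock-then-sleep idiom Peter suggested.
The elevated preempt count keeps raw_spin_unlock_irq() from acting as a
preemption point right before we call schedule() ourselves:

        /*
         * Illustrative sketch only: the preempt_disable() /
         * preempt_enable_no_resched() pair stops the unlock from
         * triggering an immediate reschedule on PREEMPT=y, since
         * schedule() is called straight afterwards anyway.
         */
        static void wait_for_event(raw_spinlock_t *lock)
        {
                preempt_disable();
                raw_spin_unlock_irq(lock);      /* no resched point here now */
                preempt_enable_no_resched();    /* drop count, don't reschedule */
                schedule();                     /* sleep; a single resched pass */
        }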
--
Thanks
Alex