Message-ID: <20160331083336.GA27831@dhcp22.suse.cz>
Date: Thu, 31 Mar 2016 10:33:36 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>,
"David S. Miller" <davem@...emloft.net>,
Tony Luck <tony.luck@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Chris Zankel <chris@...kel.net>,
Max Filippov <jcmvbkbc@...il.com>, x86@...nel.org,
linux-alpha@...r.kernel.org, linux-ia64@...r.kernel.org,
linux-s390@...r.kernel.org, linux-sh@...r.kernel.org,
sparclinux@...r.kernel.org, linux-xtensa@...ux-xtensa.org,
linux-arch@...r.kernel.org
Subject: Re: [PATCH 03/11] locking, rwsem: introduce basis for
down_write_killable
On Wed 30-03-16 15:25:49, Peter Zijlstra wrote:
[...]
> Why is the signal_pending_state() test _after_ the call to schedule()
> and before the 'trylock'?
No special reason. I guess I was just too focused on the wake_by_signal
path and overlooked the trylock.
> __mutex_lock_common() has it before the call to schedule and after the
> 'trylock'.
>
> The difference is that rwsem will now respond to the KILL and return
> -EINTR even if the lock is available, whereas mutex will acquire it and
> ignore the signal (for a little while longer).
>
> Neither is wrong per se, but I feel all the locking primitives should
> behave in a consistent manner in this regard.
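For reference, __mutex_lock_common orders it roughly like this (a
simplified sketch, with the atomic trylock on lock->count abbreviated
to a placeholder name, so treat it as illustrative rather than the
literal source):

	for (;;) {
		/* placeholder for the real atomic trylock on lock->count */
		if (mutex_trylock_fastpath_succeeded(lock))
			break;

		/*
		 * Only bail out on a pending signal if we would otherwise
		 * have to sleep; if the lock was free we already took it
		 * above.
		 */
		if (unlikely(signal_pending_state(state, task))) {
			ret = -EINTR;
			goto err;
		}

		__set_task_state(task, state);
		spin_unlock_mutex(&lock->wait_lock, flags);
		schedule_preempt_disabled();
		spin_lock_mutex(&lock->wait_lock, flags);
	}
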
Agreed! What about the following on top? I will repost the full patch
if it looks OK.
Thanks!
---
diff --git a/kernel/locking/rwsem-spinlock.c b/kernel/locking/rwsem-spinlock.c
index d1d04ca10d0e..fb2db7b408f0 100644
--- a/kernel/locking/rwsem-spinlock.c
+++ b/kernel/locking/rwsem-spinlock.c
@@ -216,14 +216,13 @@ int __sched __down_write_state(struct rw_semaphore *sem, int state)
 		 */
 		if (sem->count == 0)
 			break;
-		set_task_state(tsk, state);
-		raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
-		schedule();
 		if (signal_pending_state(state, current)) {
 			ret = -EINTR;
-			raw_spin_lock_irqsave(&sem->wait_lock, flags);
 			goto out;
 		}
+		set_task_state(tsk, state);
+		raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
+		schedule();
 		raw_spin_lock_irqsave(&sem->wait_lock, flags);
 	}
 	/* got the lock */
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 5cec34f1ad6f..781b2628e41b 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -487,19 +487,19 @@ __rwsem_down_write_failed_state(struct rw_semaphore *sem, int state)
 
 		/* Block until there are no active lockers. */
 		do {
-			schedule();
 			if (signal_pending_state(state, current)) {
 				raw_spin_lock_irq(&sem->wait_lock);
 				ret = ERR_PTR(-EINTR);
 				goto out;
 			}
+			schedule();
 			set_current_state(state);
 		} while ((count = sem->count) & RWSEM_ACTIVE_MASK);
 
 		raw_spin_lock_irq(&sem->wait_lock);
 	}
-	__set_current_state(TASK_RUNNING);
 
 out:
+	__set_current_state(TASK_RUNNING);
 	list_del(&waiter.list);
 	raw_spin_unlock_irq(&sem->wait_lock);
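
To spell out the resulting ordering, the write-wait loop in
rwsem-xadd.c would then read roughly as follows (pieced together from
the hunk above rather than copied from the patched file, so take it as
a sketch):

		/* Block until there are no active lockers. */
		do {
			if (signal_pending_state(state, current)) {
				raw_spin_lock_irq(&sem->wait_lock);
				ret = ERR_PTR(-EINTR);
				goto out;
			}
			schedule();
			set_current_state(state);
		} while ((count = sem->count) & RWSEM_ACTIVE_MASK);

i.e. the trylock in the outer loop is attempted first, the signal is
checked only once we know we have to sleep, and only then do we
schedule(), which matches the mutex ordering.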
--
Michal Hocko
SUSE Labs