Message-Id: <1407995870-4268-1-git-send-email-davidlohr@hp.com>
Date: Wed, 13 Aug 2014 22:57:50 -0700
From: Davidlohr Bueso <davidlohr@...com>
To: peterz@...radead.org, mingo@...nel.org
Cc: jason.low2@...com, Waiman.Long@...com, davidlohr@...com,
    scott.norton@...com, aswin@...com, linux-kernel@...r.kernel.org
Subject: [PATCH -tip] locking/mutexes: Avoid bogus wakeups after lock stealing

The mutex lock-stealing functionality allows another task to skip its
turn in the wait-queue and atomically acquire the lock. This is fine and
a nice optimization; however, when releasing the mutex, we always wake
up the next task in FIFO order. When the lock has been stolen, this
wastes a wakeup: the task wakes up only to find it cannot acquire the
lock and goes right back to sleep. While in practice this window is
quite small, the point is not performance or avoiding the wait_lock;
avoiding bogus wakeups is simply the right thing to do.

In order to deal with the race of potentially missing the unlock
slowpath (details in the comments), we pessimistically set the lock to
have waiters. The downside of this is that the task that stole the lock
will always have to acquire the mutex in its slowpath (as
mutex_try_to_acquire() would never succeed). However, since this path is
rarely taken, the cost is never really noticed.

Signed-off-by: Davidlohr Bueso <davidlohr@...com>
---
Original thread: https://lkml.org/lkml/2014/8/8/37

 kernel/locking/mutex.c | 43 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index dadbf88..4570611 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -383,12 +383,26 @@ done:
 
 	return false;
 }
+
+static inline bool mutex_has_owner(struct mutex *lock)
+{
+	struct task_struct *owner = ACCESS_ONCE(lock->owner);
+
+	return owner != NULL;
+}
+
 #else
+
 static bool mutex_optimistic_spin(struct mutex *lock,
 				  struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
 {
 	return false;
 }
+
+static inline bool mutex_has_owner(struct mutex *lock)
+{
+	return false;
+}
 #endif
 
 __visible __used noinline
@@ -715,6 +729,35 @@ __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
 {
 	unsigned long flags;
 
+/*
+ * Skip the mutex_has_owner() check when DEBUG: returning early below
+ * avoids taking the wait_lock and therefore never calls mutex_release()
+ * and debug_mutex_unlock(), which is only safe when !DEBUG. Otherwise
+ * it can result in a hung task when another one enters the lock's
+ * slowpath in mutex_lock().
+ */
+#ifndef CONFIG_DEBUG_MUTEXES
+	/*
+	 * Abort the wakeup operation if there is another mutex owner, as the
+	 * lock was stolen. mutex_unlock() should have cleared the owner field
+	 * before calling this function. If that field is now set, another task
+	 * must have acquired the mutex. Note that this is a very tiny window.
+	 */
+	if (unlikely(mutex_has_owner(lock))) {
+		/*
+		 * Unconditionally set the lock to have waiters. Otherwise
+		 * we can race with another task that grabbed the mutex via
+		 * optimistic spinning and set the lock to 0. When done,
+		 * the unlock logic never reaches the slowpath, thus never
+		 * waking the next task in the queue.
+		 * Furthermore, this is safe as we've already acknowledged
+		 * the fact that the lock was stolen and now a new owner
+		 * exists.
+		 */
+		atomic_set(&lock->count, -1);
+		return;
+	}
+#endif
 	/*
 	 * As a performance measurement, release the lock before doing other
 	 * wakeup related duties to follow. This allows other tasks to acquire

-- 
1.8.1.4
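
For readers tracing the counter logic above: the patch relies on the
usual mutex counter convention (1 = unlocked, 0 = locked with no
waiters, negative = locked with possible waiters), where the unlock
fastpath only drops into the slowpath when the incremented count is not
positive. The standalone sketch below is not kernel code; it is a
minimal userspace model (the model_* names are invented for
illustration) of why pessimistically forcing the count to -1 guarantees
that the eventual unlock takes the slowpath and wakes the next waiter.

/*
 * Standalone userspace sketch, NOT kernel code: it models the mutex
 * counter convention the patch depends on (1 = unlocked, 0 = locked
 * with no waiters, negative = locked and waiters may exist). The
 * model_* names are invented for illustration only.
 */
#include <stdio.h>

static int model_count;

/* Models the unlock fastpath: increment; only a positive result skips the slowpath. */
static int model_unlock_skips_slowpath(void)
{
	return ++model_count > 0;
}

int main(void)
{
	/* Lock stolen and count left at 0: unlock would skip the slowpath and wake nobody. */
	model_count = 0;
	printf("count  0 -> skips slowpath: %d\n", model_unlock_skips_slowpath());

	/* Count forced to -1: unlock must take the slowpath and wake the next waiter. */
	model_count = -1;
	printf("count -1 -> skips slowpath: %d\n", model_unlock_skips_slowpath());
	return 0;
}

With the count left at 0 the model's fastpath check succeeds and no
waiter is woken; with the count forced to -1 it fails, which is the
effect the atomic_set(&lock->count, -1) in the patch is after.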