Message-Id: <1492092174-31734-3-git-send-email-alex.shi@linaro.org>
Date:   Thu, 13 Apr 2017 22:02:53 +0800
From:   Alex Shi <alex.shi@...aro.org>
To:     peterz@...radead.org, mingo@...hat.com, corbet@....net,
        linux-kernel@...r.kernel.org (open list:LOCKING PRIMITIVES)
Cc:     linux-kernel@...r.kernel.org, Alex Shi <alex.shi@...aro.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Sebastian Siewior <bigeasy@...utronix.de>,
        Thomas Gleixner <tglx@...utronix.de>
Subject: [PATCH 2/3] rtmutex: deboost priority conditionally on rt-mutex unlock

rt_mutex_fastunlock() deboosts the 'current' task only when it needs
to be deboosted, but rt_mutex_slowunlock() sets the 'deboost' flag
unconditionally. That causes unnecessary priority adjustments.

Since 'current' is releasing this lock, it should have a higher
priority than the next top waiter, unless current's priority was
inherited from that very waiter. Only in that case does 'current'
need to be deboosted after the lock release.
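
For context, here is a minimal sketch of the fast-unlock caller that
consumes this return value, loosely modelled on rt_mutex_fastunlock()
of this kernel generation (simplified and not part of this patch; the
exact body in a given tree may differ):

	static inline void
	rt_mutex_fastunlock(struct rt_mutex *lock,
			    bool (*slowfn)(struct rt_mutex *lock,
					   struct wake_q_head *wqh))
	{
		DEFINE_WAKE_Q(wake_q);
		bool deboost;

		/* Fast path: no waiters, just drop ownership and return. */
		if (likely(rt_mutex_cmpxchg_release(lock, current, NULL)))
			return;

		/* Slow path: queue the top waiter's wakeup under wait_lock. */
		deboost = slowfn(lock, &wake_q);
		wake_up_q(&wake_q);

		/*
		 * Before this patch the slow path always returned true; with
		 * it, a deboost is requested only when 'current' inherited
		 * its priority from the waiter just woken, so this adjustment
		 * is skipped in the common case.
		 */
		if (deboost)
			rt_mutex_adjust_prio(current);
	}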

Signed-off-by: Alex Shi <alex.shi@...aro.org>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Sebastian Siewior <bigeasy@...utronix.de>
To: linux-kernel@...r.kernel.org
To: Ingo Molnar <mingo@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
---
 kernel/locking/rtmutex.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 6edc32e..05ff685 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1037,10 +1037,11 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
  *
  * Called with lock->wait_lock held and interrupts disabled.
  */
-static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
+static bool mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 				    struct rt_mutex *lock)
 {
 	struct rt_mutex_waiter *waiter;
+	bool deboost = false;
 
 	raw_spin_lock(&current->pi_lock);
 
@@ -1055,6 +1056,15 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 	rt_mutex_dequeue_pi(current, waiter);
 
 	/*
+	 * Since 'current' is releasing this lock, it should have a higher
+	 * priority than the next top waiter, unless current's priority
+	 * was inherited from that very waiter. Only in that case does
+	 * 'current' need to be deboosted after the lock release.
+	 */
+	if (current->prio == waiter->prio)
+		deboost = true;
+
+	/*
 	 * As we are waking up the top waiter, and the waiter stays
 	 * queued on the lock until it gets the lock, this lock
 	 * obviously has waiters. Just set the bit here and this has
@@ -1067,6 +1077,8 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 	raw_spin_unlock(&current->pi_lock);
 
 	wake_q_add(wake_q, waiter->task);
+
+	return deboost;
 }
 
 /*
@@ -1336,6 +1348,7 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
 					struct wake_q_head *wake_q)
 {
 	unsigned long flags;
+	bool deboost = false;
 
 	/* irqsave required to support early boot calls */
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
@@ -1389,12 +1402,12 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
 	 *
 	 * Queue the next waiter for wakeup once we release the wait_lock.
 	 */
-	mark_wakeup_next_waiter(wake_q, lock);
+	deboost = mark_wakeup_next_waiter(wake_q, lock);
 
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	/* check PI boosting */
-	return true;
+	return deboost;
 }
 
 /*
-- 
1.9.1
