Message-ID: <1348491997-30898-1-git-send-email-meltedpianoman@gmail.com>
Date: Mon, 24 Sep 2012 15:06:37 +0200
From: Ivo Sieben <meltedpianoman@...il.com>
To: <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
<linux-serial@...r.kernel.org>,
RT <linux-rt-users@...r.kernel.org>,
Alan Cox <alan@...ux.intel.com>,
Greg KH <gregkh@...uxfoundation.org>
CC: Ivo Sieben <meltedpianoman@...il.com>
Subject: [PATCH] RFC: sched: Prevent wakeup to enter critical section needlessly
Check that the waitqueue task list is non-empty before entering the critical
section. This avoids taking the spin lock needlessly when the queue is empty,
and therefore also avoids scheduling overhead on a PREEMPT_RT system.
Signed-off-by: Ivo Sieben <meltedpianoman@...il.com>
---
Request for comments:
- Does this make any sense?
- I assume that I can safely use the list_empty_careful() function here, but is
that correct?
Background to this patch:
Testing on a PREEMPT_RT system with TTY serial communication. Each time the TTY
line discipline is dereferenced, the idle-handling wait queue is woken up (see
the function put_ldisc in drivers/tty/tty_ldisc.c).

However, line discipline idle handling is not used very often, so the wait
queue is empty most of the time. Even so, the wake_up() function still enters
the critical section guarded by the spin lock. This causes additional
scheduling overhead when a lower-priority thread holds that same lock.
kernel/sched/core.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 649c9f8..6436eb8 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3631,9 +3631,19 @@ void __wake_up(wait_queue_head_t *q, unsigned int mode,
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&q->lock, flags);
-	__wake_up_common(q, mode, nr_exclusive, 0, key);
-	spin_unlock_irqrestore(&q->lock, flags);
+	/*
+	 * We can check for list emptiness outside the lock by using the
+	 * "careful" check that verifies both the next and prev pointers, so
+	 * that there cannot be any half-pending updates in progress.
+	 *
+	 * This prevents the wakeup from entering the critical section
+	 * needlessly when the task list is empty.
+	 */
+	if (!list_empty_careful(&q->task_list)) {
+		spin_lock_irqsave(&q->lock, flags);
+		__wake_up_common(q, mode, nr_exclusive, 0, key);
+		spin_unlock_irqrestore(&q->lock, flags);
+	}
 }
 EXPORT_SYMBOL(__wake_up);
--
1.7.9.5