Message-ID: <1350549005-18309-1-git-send-email-meltedpianoman@gmail.com>
Date: Thu, 18 Oct 2012 10:30:05 +0200
From: Ivo Sieben <meltedpianoman@...il.com>
To: <linux-kernel@...r.kernel.org>, Andi Kleen <andi@...stfloor.org>,
Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>
CC: <linux-serial@...r.kernel.org>, Alan Cox <alan@...ux.intel.com>,
Greg KH <gregkh@...uxfoundation.org>,
Ivo Sieben <meltedpianoman@...il.com>
Subject: [PATCH v2] sched: Prevent wakeup from entering the critical section needlessly
Check that the waitqueue task list is non-empty before entering the critical
section. This avoids taking the spinlock needlessly when the queue is empty,
and therefore also avoids the resulting scheduling overhead on a PREEMPT_RT
system, where spinlocks are sleeping locks.
Signed-off-by: Ivo Sieben <meltedpianoman@...il.com>
---
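For illustration (not part of the patch), the callers that benefit are hot
paths that issue a wakeup on every event even though the wait queue is
usually empty, e.g. a serial-style driver waking a reader from its receive
interrupt. The names rx_wq and rx_irq below are hypothetical:

static DECLARE_WAIT_QUEUE_HEAD(rx_wq);	/* hypothetical read-side wait queue */

static irqreturn_t rx_irq(int irq, void *dev_id)
{
	/* ... store the received data in a buffer ... */

	/*
	 * Runs for every received character. When no reader is sleeping
	 * on rx_wq, taking q->lock here is pure overhead; on PREEMPT_RT
	 * the spinlock is a sleeping lock, so it can even schedule.
	 */
	wake_up_interruptible(&rx_wq);
	return IRQ_HANDLED;
}

With the check added by this patch, the empty-queue case never takes q->lock.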
v2:
- We don't need the "careful" list_empty() variant; a plain list_empty()
  is sufficient: if we miss a concurrent update, the effect is the same as
  if the wakeup had simply happened a little later.
- Because of memory ordering we could otherwise observe a stale list
  administration, which could cause wait_event-like code to miss an event.
  Adding a memory barrier before checking the list for emptiness guarantees
  that we evaluate an up-to-date list administration (see the sketch below).
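To illustrate the pairing the barrier relies on, here is a minimal sketch
(not part of the patch) of a wait_event-like waiter and its waker; the names
my_wq and done are hypothetical. prepare_to_wait() adds the waiter to
q->task_list and sets the task state under q->lock (set_current_state()
implies a full barrier), so with the smp_mb() in __wake_up() either the
waker sees a non-empty list or the waiter sees done == 1:

static DECLARE_WAIT_QUEUE_HEAD(my_wq);	/* hypothetical wait queue */
static int done;			/* hypothetical condition */

/* Waiter: queue ourselves under q->lock, then test the condition. */
static void waiter(void)
{
	DEFINE_WAIT(wait);

	for (;;) {
		/* list_add + set_current_state() (full barrier) under q->lock */
		prepare_to_wait(&my_wq, &wait, TASK_UNINTERRUPTIBLE);
		if (done)
			break;
		schedule();
	}
	finish_wait(&my_wq, &wait);
}

/* Waker: set the condition, then wake up any sleepers. */
static void waker(void)
{
	done = 1;
	/*
	 * wake_up() ends up in __wake_up(): smp_mb(), then the unlocked
	 * list_empty() check, then the locked wakeup if needed.
	 */
	wake_up(&my_wq);
}

This is the classic store/load ("store buffer") pattern: each side stores and
then loads, and the full barrier on each side guarantees that at least one of
the two loads observes the other side's store, so the wakeup cannot be lost.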
kernel/sched/core.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2d8927f..168a9b2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3090,9 +3090,22 @@ void __wake_up(wait_queue_head_t *q, unsigned int mode,
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&q->lock, flags);
-	__wake_up_common(q, mode, nr_exclusive, 0, key);
-	spin_unlock_irqrestore(&q->lock, flags);
+	/*
+	 * Check for list emptiness outside the lock. This prevents the wakeup
+	 * from entering the critical section needlessly when the task list
+	 * is empty.
+	 *
+	 * Place a full memory barrier before checking list emptiness to make
+	 * sure this function sees an up-to-date list administration. Note
+	 * that other code that manipulates the list does so under q->lock
+	 * and therefore doesn't need additional memory barriers.
+	 */
+	smp_mb();
+	if (!list_empty(&q->task_list)) {
+		spin_lock_irqsave(&q->lock, flags);
+		__wake_up_common(q, mode, nr_exclusive, 0, key);
+		spin_unlock_irqrestore(&q->lock, flags);
+	}
 }
 EXPORT_SYMBOL(__wake_up);
--
1.7.9.5