Message-ID: <50AA07FA.5020503@linux.vnet.ibm.com>
Date: Mon, 19 Nov 2012 15:50:42 +0530
From: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To: Ivo Sieben <meltedpianoman@...il.com>
CC: linux-kernel@...r.kernel.org, Andi Kleen <andi@...stfloor.org>,
Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, linux-serial@...r.kernel.org,
Alan Cox <alan@...ux.intel.com>,
Greg KH <gregkh@...uxfoundation.org>
Subject: Re: [REPOST-v2] sched: Prevent wakeup to enter critical section needlessly
Hi Ivo,
On 11/19/2012 01:00 PM, Ivo Sieben wrote:
> Check that the waitqueue task list is non-empty before entering the critical
> section. This avoids taking the spinlock needlessly when the queue is empty,
> and therefore also prevents scheduling overhead on a PREEMPT_RT system.
>
> Signed-off-by: Ivo Sieben <meltedpianoman@...il.com>
> ---
>
> This is a second repost of v2 of this patch: can anyone respond?
> Did I apply the memory barrier correctly?
>
> v2:
> - We don't need the "careful" list_empty(); a normal list_empty() is
> sufficient: if we miss an update, it is as if the update simply happened a
> little later.
> - Because of memory ordering, we could observe a stale view of the list.
> This could cause wait_event-like code to miss a wakeup. Adding a memory
> barrier before checking that the list is empty guarantees we evaluate an
> up-to-date view of the list.
>
> kernel/sched/core.c | 19 ++++++++++++++++---
> 1 file changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 2d8927f..168a9b2 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3090,9 +3090,22 @@ void __wake_up(wait_queue_head_t *q, unsigned int mode,
> {
> unsigned long flags;
>
> - spin_lock_irqsave(&q->lock, flags);
> - __wake_up_common(q, mode, nr_exclusive, 0, key);
> - spin_unlock_irqrestore(&q->lock, flags);
> + /*
> + * We check for list emptiness outside the lock. This prevents the
> + * wakeup from entering the critical section needlessly when the task
> + * list is empty.
> + *
> + * A full memory barrier is placed before the emptiness check to make
> + * sure this function sees an up-to-date view of the list. Note that
> + * other code that manipulates the list holds the spinlock and
> + * therefore doesn't need additional memory barriers.
> + */
> + smp_mb();
> + if (!list_empty(&q->task_list)) {
> + spin_lock_irqsave(&q->lock, flags);
> + __wake_up_common(q, mode, nr_exclusive, 0, key);
> + spin_unlock_irqrestore(&q->lock, flags);
> + }
> }
> EXPORT_SYMBOL(__wake_up);
>
>
Looks good to me.
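
For readers following the ordering argument: the smp_mb() above pairs with
the barrier implied by set_current_state() inside prepare_to_wait() on the
waiter side. Below is a minimal illustrative sketch of that pairing (not part
of the patch; wait_for_flag() and set_flag_and_wake() are made-up names for
the example). Either the waker sees the waiter already on q->task_list, or
the waiter sees the updated condition and does not sleep, so no wakeup is
lost even though list_empty() is checked without holding q->lock.

#include <linux/wait.h>
#include <linux/sched.h>

/* Waiter side: a standard open-coded wait loop (illustrative only). */
static int wait_for_flag(wait_queue_head_t *q, int *flag)
{
	DEFINE_WAIT(wait);
	int ret = 0;

	for (;;) {
		/*
		 * prepare_to_wait() links us onto q->task_list and calls
		 * set_current_state(), which implies a full barrier, so
		 * the list update is visible before we re-check *flag.
		 */
		prepare_to_wait(q, &wait, TASK_INTERRUPTIBLE);
		if (*flag)
			break;
		if (signal_pending(current)) {
			ret = -ERESTARTSYS;
			break;
		}
		schedule();
	}
	finish_wait(q, &wait);

	return ret;
}

/*
 * Waker side (illustrative): update the condition, then wake. The
 * smp_mb() added in __wake_up() orders the store to *flag before the
 * list_empty() check, so a waiter that is already queued is never missed.
 */
static void set_flag_and_wake(wait_queue_head_t *q, int *flag)
{
	*flag = 1;
	wake_up_interruptible(q);
}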
Reviewed-by: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
Regards
Preeti U Murthy