Message-Id: <200808261614.49662.nickpiggin@yahoo.com.au>
Date: Tue, 26 Aug 2008 16:14:49 +1000
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Gregory Haskins <ghaskins@...ell.com>
Cc: mingo@...e.hu, srostedt@...hat.com, peterz@...radead.org,
linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
npiggin@...e.de, gregory.haskins@...il.com
Subject: Re: [PATCH 3/5] sched: make double-lock-balance fair
On Tuesday 26 August 2008 06:15, Gregory Haskins wrote:
> double_lock_balance() currently favors logically lower cpus since they
> often do not have to release their own lock to acquire a second lock.
> The result is that logically higher cpus can get starved when there is
> a lot of pressure on the RQs. This can result in higher latencies on
> higher cpu-ids.
>
> This patch makes the algorithm fairer by forcing all paths to release
> both locks before acquiring them again. Since callsites to
> double_lock_balance already consider it a potential preemption/reschedule
> point, they have the proper logic to recheck for atomicity violations.
>
> Signed-off-by: Gregory Haskins <ghaskins@...ell.com>
> ---
>
> kernel/sched.c | 17 +++++------------
> 1 files changed, 5 insertions(+), 12 deletions(-)
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 6e0bde6..b7326cd 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -2790,23 +2790,16 @@ static int double_lock_balance(struct rq *this_rq,
> struct rq *busiest) __acquires(busiest->lock)
> __acquires(this_rq->lock)
> {
> - int ret = 0;
> -
> if (unlikely(!irqs_disabled())) {
> /* printk() doesn't work good under rq->lock */
> spin_unlock(&this_rq->lock);
> BUG_ON(1);
> }
> - if (unlikely(!spin_trylock(&busiest->lock))) {
> - if (busiest < this_rq) {
> - spin_unlock(&this_rq->lock);
> - spin_lock(&busiest->lock);
> - spin_lock_nested(&this_rq->lock, SINGLE_DEPTH_NESTING);
> - ret = 1;
> - } else
> - spin_lock_nested(&busiest->lock, SINGLE_DEPTH_NESTING);
> - }
> - return ret;
> +
> + spin_unlock(&this_rq->lock);
> + double_rq_lock(this_rq, busiest);
Rather than adding the extra atomic operation on every call, can't you just
put this in the unlikely spin_trylock failure path, in place of the unfair
logic there?
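Something like this (untested sketch, assuming double_rq_lock() keeps its
usual lower-address-first ordering, and leaving the irqs_disabled() check
out for brevity) is roughly what I mean:

static int double_lock_balance(struct rq *this_rq, struct rq *busiest)
	__acquires(busiest->lock)
	__acquires(this_rq->lock)
{
	int ret = 0;

	/* Fast path: no extra atomics when busiest->lock is uncontended. */
	if (unlikely(!spin_trylock(&busiest->lock))) {
		/*
		 * Slow path: drop our lock and take both in the fixed
		 * order, so neither side is favoured any more.
		 */
		spin_unlock(&this_rq->lock);
		double_rq_lock(this_rq, busiest);
		ret = 1;
	}
	return ret;
}

That keeps the common case at a single trylock, and only contended callers
pay for the unlock/relock.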
FWIW, this is always going to be a *tiny* bit unfair, because double_rq_lock()
takes the lower lock first. I guess to fix that you would need a single lock
to take before taking the 2 rq locks. But that's not really appropriate for
mainline (and maybe not -rt either).
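If someone ever did want to chase that, the obvious shape would be a single
outer lock that all double-lockers queue on before taking the two rq locks.
Purely illustrative (the names are made up, not proposing this anywhere):

static DEFINE_SPINLOCK(balance_lock);

static void fair_double_rq_lock(struct rq *rq1, struct rq *rq2)
{
	/*
	 * All double-lockers serialise on one lock first, so the
	 * lower-address-first ordering inside double_rq_lock() no
	 * longer favours any particular cpu.
	 */
	spin_lock(&balance_lock);
	double_rq_lock(rq1, rq2);
	spin_unlock(&balance_lock);
}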