Message-ID: <20070302142308.GB8226@osiris.boeblingen.de.ibm.com>
Date: Fri, 2 Mar 2007 15:23:08 +0100
From: Heiko Carstens <heiko.carstens@...ibm.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
john stultz <johnstul@...ibm.com>,
Roman Zippel <zippel@...ux-m68k.org>,
Christian Borntraeger <cborntra@...ibm.com>,
linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch] timer/hrtimer: take per cpu locks in sane order
On Fri, Mar 02, 2007 at 02:04:33PM +0100, Ingo Molnar wrote:
>
> * Heiko Carstens <heiko.carstens@...ibm.com> wrote:
>
> > - spin_lock(&new_base->lock);
> > - spin_lock(&old_base->lock);
> > + /*
> > + * If we take a lock from a different cpu, make sure we have always
> > + * the same locking order. That is the lock that belongs to the cpu
> > + * with the lowest number is taken first.
> > + */
> > + lock1 = smp_processor_id() < cpu ? &new_base->lock : &old_base->lock;
> > + lock2 = smp_processor_id() < cpu ? &old_base->lock : &new_base->lock;
> > + spin_lock(lock1);
> > + spin_lock(lock2);
>
> looks good to me. Wouldn't this be cleaner via double_lock_timer() -
> similar to how double_rq_lock() works in kernel/sched.c - instead of
> open-coding it?
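For reference, the ordering rule in the quoted hunk boils down to "always
take the lock of the lower-numbered cpu first". A rough user-space analogue
(my sketch, not kernel code: pthread mutexes stand in for the per-cpu base
spinlocks, and lock_pair()/unlock_pair() are made-up names):

#include <pthread.h>

#define NR_CPUS 4

static pthread_mutex_t cpu_lock[NR_CPUS] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};

/*
 * Take both locks, lowest cpu number first (assumes self != other).
 * As long as every path that needs two of these locks follows the same
 * rule, two cpus migrating towards each other cannot end up in an
 * ABBA deadlock.
 */
static void lock_pair(int self, int other)
{
	int lo = self < other ? self : other;
	int hi = self < other ? other : self;

	pthread_mutex_lock(&cpu_lock[lo]);
	pthread_mutex_lock(&cpu_lock[hi]);
}

static void unlock_pair(int self, int other)
{
	/* Unlock order does not matter for deadlock avoidance. */
	pthread_mutex_unlock(&cpu_lock[self]);
	pthread_mutex_unlock(&cpu_lock[other]);
}
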
Something like the patch below? It exploits the knowledge that the two
tvec_base_t's come from a per_cpu array, so comparing their addresses gives
a stable lock order; otherwise I would end up passing a lot of redundant
arguments around. Still, I think that's less a clean solution than a
hack...
I'd rather go with the patch above. (A user-space sketch of the same
address-ordering idiom follows after the patch.)
---
Index: linux-2.6/kernel/timer.c
===================================================================
--- linux-2.6.orig/kernel/timer.c
+++ linux-2.6/kernel/timer.c
@@ -1640,6 +1640,28 @@ static void migrate_timer_list(tvec_base
}
}
+static void __devinit double_tvec_lock(tvec_base_t *base1, tvec_base_t *base2)
+{
+ if (base1 < base2) {
+ spin_lock(&base1->lock);
+ spin_lock(&base2->lock);
+ } else {
+ spin_lock(&base2->lock);
+ spin_lock(&base1->lock);
+ }
+}
+
+static void __devinit double_tvec_unlock(tvec_base_t *base1, tvec_base_t *base2)
+{
+ if (base1 < base2) {
+ spin_unlock(&base1->lock);
+ spin_unlock(&base2->lock);
+ } else {
+ spin_unlock(&base2->lock);
+ spin_unlock(&base1->lock);
+ }
+}
+
static void __devinit migrate_timers(int cpu)
{
tvec_base_t *old_base;
@@ -1651,8 +1673,7 @@ static void __devinit migrate_timers(int
new_base = get_cpu_var(tvec_bases);
local_irq_disable();
- spin_lock(&new_base->lock);
- spin_lock(&old_base->lock);
+ double_tvec_lock(new_base, old_base);
BUG_ON(old_base->running_timer);
@@ -1665,8 +1686,7 @@ static void __devinit migrate_timers(int
migrate_timer_list(new_base, old_base->tv5.vec + i);
}
- spin_unlock(&old_base->lock);
- spin_unlock(&new_base->lock);
+ double_tvec_unlock(new_base, old_base);
local_irq_enable();
put_cpu_var(tvec_bases);
}
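
And to make the per_cpu array argument concrete outside the kernel, here is
a rough user-space analogue of double_tvec_lock()/double_tvec_unlock() (again
my sketch, not kernel code: pthread mutexes instead of spinlocks, made-up
names, and the bases kept in one array so comparing their addresses is
well-defined and gives every caller the same lock order):

#include <pthread.h>

#define NR_BASES 4

struct timer_base {
	pthread_mutex_t lock;
	/* ... per-base timer state ... */
};

static struct timer_base bases[NR_BASES] = {
	{ PTHREAD_MUTEX_INITIALIZER }, { PTHREAD_MUTEX_INITIALIZER },
	{ PTHREAD_MUTEX_INITIALIZER }, { PTHREAD_MUTEX_INITIALIZER },
};

/* Lock the lower-addressed base first so the order is globally consistent. */
static void double_base_lock(struct timer_base *b1, struct timer_base *b2)
{
	if (b1 < b2) {
		pthread_mutex_lock(&b1->lock);
		pthread_mutex_lock(&b2->lock);
	} else {
		pthread_mutex_lock(&b2->lock);
		pthread_mutex_lock(&b1->lock);
	}
}

static void double_base_unlock(struct timer_base *b1, struct timer_base *b2)
{
	/* Unlock order does not matter for deadlock avoidance. */
	pthread_mutex_unlock(&b1->lock);
	pthread_mutex_unlock(&b2->lock);
}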