Message-Id: <1257482803.6956.4.camel@marge.simson.net>
Date: Fri, 06 Nov 2009 05:46:43 +0100
From: Mike Galbraith <efault@....de>
To: Lai Jiangshan <laijs@...fujitsu.com>
Cc: Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Eric Paris <eparis@...hat.com>, linux-kernel@...r.kernel.org,
hpa@...or.com, tglx@...utronix.de
Subject: Re: [patch] Re: There is something with scheduler (was Re: [patch]
Re: [regression bisect -next] BUG: using smp_processor_id() in preemptible
[00000000] code: rmmod)
On Fri, 2009-11-06 at 10:31 +0800, Lai Jiangshan wrote:
>
> > +/*
> > + * cpu_rq_lock - lock the runqueue a given task resides on and disable
> > + * interrupts. Note the ordering: we can safely lookup the cpu_rq without
> > + * explicitly disabling preemption.
> > + */
> > +static struct rq *cpu_rq_lock(int cpu, unsigned long *flags)
> > +	__acquires(rq->lock)
> > +{
> > +	struct rq *rq;
> > +
> > +	for (;;) {
> > +		local_irq_save(*flags);
> > +		rq = cpu_rq(cpu);
> > +		spin_lock(&rq->lock);
> > +		if (likely(rq == cpu_rq(cpu)))
> > +			return rq;
> > +		spin_unlock_irqrestore(&rq->lock, *flags);
> > +	}
> > +}
> > +
> > +static inline void cpu_rq_unlock(struct rq *rq, unsigned long *flags)
> > +	__releases(rq->lock)
> > +{
> > +	spin_unlock_irqrestore(&rq->lock, *flags);
> > +}
> > +
>
> The above code is totally garbage, cpu_rq(cpu) is constant.
No, that's not the garbage bit. The true hazard of late late night is
that you can't _see_ anymore. cpu_rq_lock + spin_unlock :)))))
Now I'm _really_ puzzled. Embarrassing, but funny.
-Mike
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/