Message-ID: <AANLkTimXyic51Qhe_WsfFBwAw10AKdB7e-Z2q0oLRYKP@mail.gmail.com>
Date: Fri, 17 Dec 2010 11:06:19 +0800
From: "Yan, Zheng" <zheng.z.yan@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Oleg Nesterov <oleg@...hat.com>,
Chris Mason <chris.mason@...cle.com>,
Frank Rowand <frank.rowand@...sony.com>,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Mike Galbraith <efault@....de>, Paul Turner <pjt@...gle.com>,
Jens Axboe <axboe@...nel.dk>, linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH 5/5] sched: Reduce ttwu rq->lock contention
On Fri, Dec 17, 2010 at 4:32 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> @@ -953,7 +955,7 @@ static inline struct rq *__task_rq_lock(
>  	for (;;) {
>  		rq = task_rq(p);
>  		raw_spin_lock(&rq->lock);
> -		if (likely(rq == task_rq(p)))
> +		if (likely(rq == task_rq(p)) && !task_is_waking(p))
>  			return rq;
>  		raw_spin_unlock(&rq->lock);
>  	}
> @@ -973,7 +975,7 @@ static struct rq *task_rq_lock(struct ta
>  		local_irq_save(*flags);
>  		rq = task_rq(p);
>  		raw_spin_lock(&rq->lock);
> -		if (likely(rq == task_rq(p)))
> +		if (likely(rq == task_rq(p)) && !task_is_waking(p))
>  			return rq;
>  		raw_spin_unlock_irqrestore(&rq->lock, *flags);
>  	}
It looks like nothing prevents ttwu() from changing the task's CPU while
someone else is holding task_rq_lock(). Is this OK?
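
For concreteness, here is a userspace sketch of the window I mean. It is
a pthread analogue of my reading of the patch, not the actual kernel
paths; struct task, rq_lock[], task_rq_lock() and ttwu_migrate() below
only loosely mirror the kernel names:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* One lock per "runqueue". */
static pthread_mutex_t rq_lock[2] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

struct task {
	atomic_int cpu;		/* which rq the task belongs to */
	atomic_int waking;	/* analogue of TASK_WAKING */
};

/* Analogue of the patched task_rq_lock(): retry until the rq we
 * locked still matches task->cpu and the task is not mid-wakeup. */
static int task_rq_lock(struct task *p)
{
	for (;;) {
		int cpu = atomic_load(&p->cpu);
		pthread_mutex_lock(&rq_lock[cpu]);
		if (cpu == atomic_load(&p->cpu) &&
		    !atomic_load(&p->waking))
			return cpu;	/* caller must unlock */
		pthread_mutex_unlock(&rq_lock[cpu]);
	}
}

/* Analogue of the wakeup path as I read it: mark the task waking and
 * migrate it without taking the old rq's lock.  Nothing here waits
 * for a thread that already passed the check above. */
static void ttwu_migrate(struct task *p, int new_cpu)
{
	atomic_store(&p->waking, 1);
	atomic_store(&p->cpu, new_cpu);	/* set_task_cpu() analogue */
	atomic_store(&p->waking, 0);
}

int main(void)
{
	struct task p = { .cpu = 0, .waking = 0 };

	int cpu = task_rq_lock(&p);	/* check passed, lock held */
	ttwu_migrate(&p, 1);		/* migrates anyway */
	printf("holding rq%d->lock, but task is now on rq%d\n",
	       cpu, atomic_load(&p.cpu));
	pthread_mutex_unlock(&rq_lock[cpu]);
	return 0;
}

If callers of task_rq_lock() assume task_rq(p) cannot change while they
hold rq->lock, that assumption no longer holds here.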
Thanks
Yan, Zheng