Message-ID: <1407832463.23412.2.camel@tkhai>
Date: Tue, 12 Aug 2014 12:34:23 +0400
From: Kirill Tkhai <ktkhai@...allels.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: <linux-kernel@...r.kernel.org>, <pjt@...gle.com>,
<oleg@...hat.com>, <rostedt@...dmis.org>,
<umgwanakikbuti@...il.com>, <tkhai@...dex.ru>,
<tim.c.chen@...ux.intel.com>, <mingo@...nel.org>,
<nicolas.pitre@...aro.org>
Subject: Re: [PATCH v4 3/6] sched: Teach scheduler to understand ONRQ_MIGRATING state
On Tue, 12/08/2014 at 09:55 +0200, Peter Zijlstra wrote:
> On Wed, Aug 06, 2014 at 12:06:19PM +0400, Kirill Tkhai wrote:
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -331,9 +331,13 @@ static inline struct rq *__task_rq_lock(struct task_struct *p)
> > lockdep_assert_held(&p->pi_lock);
> >
> > for (;;) {
> > + while (unlikely(task_migrating(p)))
> > + cpu_relax();
> > +
> > rq = task_rq(p);
> > raw_spin_lock(&rq->lock);
> > - if (likely(rq == task_rq(p)))
> > + if (likely(rq == task_rq(p) &&
> > + !task_migrating(p)))
> > return rq;
> > raw_spin_unlock(&rq->lock);
> > }
> > @@ -349,10 +353,14 @@ static struct rq *task_rq_lock(struct task_struct *p, unsigned long *flags)
> > struct rq *rq;
> >
> > for (;;) {
> > + while (unlikely(task_migrating(p)))
> > + cpu_relax();
> > +
> > raw_spin_lock_irqsave(&p->pi_lock, *flags);
> > rq = task_rq(p);
> > raw_spin_lock(&rq->lock);
> > - if (likely(rq == task_rq(p)))
> > + if (likely(rq == task_rq(p) &&
> > + !task_migrating(p)))
> > return rq;
> > raw_spin_unlock(&rq->lock);
> > raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
>
> I know I suggested that, but I changed it as below. The advantage is
> not having two task_migrating() tests on the likely path.
I have no objections. Should I resend the series (also with an updated
commit log for [4/6])?
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -331,15 +331,15 @@ static inline struct rq *__task_rq_lock(
> lockdep_assert_held(&p->pi_lock);
>
> for (;;) {
> - while (unlikely(task_migrating(p)))
> - cpu_relax();
> -
> rq = task_rq(p);
> raw_spin_lock(&rq->lock);
> if (likely(rq == task_rq(p) &&
> !task_migrating(p)))
> return rq;
> raw_spin_unlock(&rq->lock);
> +
> + while (unlikely(task_migrating(p)))
> + cpu_relax();
> }
> }
>
> @@ -353,9 +353,6 @@ static struct rq *task_rq_lock(struct ta
> struct rq *rq;
>
> for (;;) {
> - while (unlikely(task_migrating(p)))
> - cpu_relax();
> -
> raw_spin_lock_irqsave(&p->pi_lock, *flags);
> rq = task_rq(p);
> raw_spin_lock(&rq->lock);
> @@ -364,6 +361,9 @@ static struct rq *task_rq_lock(struct ta
> return rq;
> raw_spin_unlock(&rq->lock);
> raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
> +
> + while (unlikely(task_migrating(p)))
> + cpu_relax();
> }
> }
>
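For readers following along outside the kernel tree, below is a minimal
user-space sketch of the loop shape Peter ends up with; it is an
illustration, not the kernel code. The migrating-wait sits on the
unlock/retry slow path, so the likely path pays for a single
task_migrating()-style check under the lock. The fake_rq/fake_task types,
fake_task_rq_lock(), and the use of pthread mutexes plus C11 atomics in
place of raw_spin_lock() and the real ONRQ_MIGRATING state are all
invented for illustration (build with e.g. cc -pthread sketch.c).

	/*
	 * Sketch only: a "task" points at a "runqueue" that a migrator
	 * may change, and a per-task migrating flag covers the window
	 * while the task is detached from any runqueue.
	 */
	#include <pthread.h>
	#include <sched.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct fake_rq {
		pthread_mutex_t lock;
		int nr_running;
	};

	struct fake_task {
		_Atomic(struct fake_rq *) rq;	/* current runqueue */
		atomic_bool migrating;		/* analogue of ONRQ_MIGRATING */
	};

	static struct fake_rq *fake_task_rq_lock(struct fake_task *p)
	{
		struct fake_rq *rq;

		for (;;) {
			/* Speculatively pick the current rq and lock it. */
			rq = atomic_load(&p->rq);
			pthread_mutex_lock(&rq->lock);

			/*
			 * Likely path: one re-check of the rq pointer and
			 * one migrating check, both under the lock.
			 */
			if (rq == atomic_load(&p->rq) &&
			    !atomic_load(&p->migrating))
				return rq;

			pthread_mutex_unlock(&rq->lock);

			/*
			 * Slow path only: wait for an in-flight migration
			 * to finish before retrying, instead of hammering
			 * the old rq's lock while the task is detached.
			 */
			while (atomic_load(&p->migrating))
				sched_yield();	/* stand-in for cpu_relax() */
		}
	}

	int main(void)
	{
		struct fake_rq rq0 = { .lock = PTHREAD_MUTEX_INITIALIZER };
		struct fake_task t = { .rq = &rq0, .migrating = false };

		struct fake_rq *rq = fake_task_rq_lock(&t);
		rq->nr_running++;
		pthread_mutex_unlock(&rq->lock);

		printf("nr_running = %d\n", rq0.nr_running);
		return 0;
	}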