Message-ID: <20140728080122.GL6758@twins.programming.kicks-ass.net>
Date: Mon, 28 Jul 2014 10:01:22 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Kirill Tkhai <tkhai@...dex.ru>
Cc: linux-kernel@...r.kernel.org, nicolas.pitre@...aro.org,
pjt@...gle.com, oleg@...hat.com, rostedt@...dmis.org,
umgwanakikbuti@...il.com, ktkhai@...allels.com,
tim.c.chen@...ux.intel.com, mingo@...nel.org
Subject: Re: [PATCH v2 2/5] sched: Teach scheduler to understand
ONRQ_MIGRATING state
On Sat, Jul 26, 2014 at 06:59:21PM +0400, Kirill Tkhai wrote:
> The benefit is that double_rq_lock() is no longer needed,
> which may reduce latencies in some situations.
> We add a loop at the beginning of set_cpus_allowed_ptr().
> It's like a handmade spinlock, similar to the
> situation we had before: we used to spin on rq->lock,
> now we spin on the "again:" label. Of course, it's worse
> than an arch-dependent spinlock, but we have to have it
> here.
> @@ -4623,8 +4639,16 @@ int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
> struct rq *rq;
> unsigned int dest_cpu;
> int ret = 0;
> +again:
> + while (unlikely(task_migrating(p)))
> + cpu_relax();
>
> rq = task_rq_lock(p, &flags);
> + /* Check again with rq locked */
> + if (unlikely(task_migrating(p))) {
> + task_rq_unlock(rq, p, &flags);
> + goto again;
> + }
>
> if (cpumask_equal(&p->cpus_allowed, new_mask))
> goto out;

So I really dislike that, especially since you're now talking about
adding more of this goo all over the place.
I'll ask again, why isn't this in task_rq_lock() and co?
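
Something like the sketch below is what I have in mind -- untested,
and assuming the task_migrating() helper from your patch -- so that
every task_rq_lock() caller gets the retry for free instead of
open-coding it:

	static struct rq *task_rq_lock(struct task_struct *p, unsigned long *flags)
		__acquires(p->pi_lock)
		__acquires(rq->lock)
	{
		struct rq *rq;

		for (;;) {
			/* Wait for any concurrent migration to finish. */
			while (unlikely(task_migrating(p)))
				cpu_relax();

			raw_spin_lock_irqsave(&p->pi_lock, *flags);
			rq = task_rq(p);
			raw_spin_lock(&rq->lock);
			/*
			 * Re-check under rq->lock; the task may have
			 * started migrating (or switched rq) meanwhile.
			 */
			if (likely(rq == task_rq(p) && !task_migrating(p)))
				return rq;
			raw_spin_unlock(&rq->lock);
			raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
		}
	}

That keeps the wait in one place, and __task_rq_lock() can grow the
same re-check.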
Also, you really need to argue that the spin is bounded, otherwise your
two quoted paragraphs above are in contradiction: an unbounded spin
would undo the latency reduction you claim. Now I think you can
actually make an argument that way, so that's good.