Date:	Mon, 28 Jul 2014 10:01:22 +0200
From:	Peter Zijlstra <>
To:	Kirill Tkhai <>
Subject: Re: [PATCH v2 2/5] sched: Teach scheduler to understand

On Sat, Jul 26, 2014 at 06:59:21PM +0400, Kirill Tkhai wrote:

> The benefit is that double_rq_lock() is no longer needed,
> which may reduce latencies in some situations.

> We add a loop at the beginning of set_cpus_allowed_ptr().
> It's like a handmade spinlock, similar to the situation
> we had before: we used to spin on rq->lock, and now we
> spin on the "again:" label. Of course, it's worse than an
> arch-dependent spinlock, but we have to have it here.

> @@ -4623,8 +4639,16 @@ int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
>  	struct rq *rq;
>  	unsigned int dest_cpu;
>  	int ret = 0;
> +again:
> +	while (unlikely(task_migrating(p)))
> +		cpu_relax();
>  	rq = task_rq_lock(p, &flags);
> +	/* Check again with rq locked */
> +	if (unlikely(task_migrating(p))) {
> +		task_rq_unlock(rq, p, &flags);
> +		goto again;
> +	}
>  	if (cpumask_equal(&p->cpus_allowed, new_mask))
>  		goto out;

So I really dislike that, especially since you're now talking about
adding more of this goo all over the place.

I'll ask again, why isn't this in task_rq_lock() and co?
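To illustrate what folding the check into the locking helper could look like, here is a minimal user-space sketch. All names below (toy_task, toy_rq, task_rq_lock_model()) are illustrative stand-ins, not the real kernel types; the point is only the shape: spin while the task is marked migrating, take the lock, then re-check under the lock and retry on a race, so callers no longer need their own "again:" loop.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy stand-ins for struct rq and struct task_struct. */
struct toy_rq {
	atomic_flag lock;		/* models rq->lock */
};

struct toy_task {
	atomic_bool on_migration;	/* models task_migrating(p) */
	struct toy_rq *rq;		/* models task_rq(p) */
};

static void rq_lock(struct toy_rq *rq)
{
	while (atomic_flag_test_and_set_explicit(&rq->lock,
						 memory_order_acquire))
		;	/* cpu_relax() in the kernel */
}

static void rq_unlock(struct toy_rq *rq)
{
	atomic_flag_clear_explicit(&rq->lock, memory_order_release);
}

/*
 * task_rq_lock()-style helper with the migrating check folded in:
 * spin while the task is migrating, take the lock, then re-check
 * under the lock, since the flag may have been set between the
 * unlocked test and the lock acquisition.
 */
static struct toy_rq *task_rq_lock_model(struct toy_task *p)
{
	for (;;) {
		while (atomic_load(&p->on_migration))
			;	/* cpu_relax() */
		rq_lock(p->rq);
		if (!atomic_load(&p->on_migration))
			return p->rq;	/* locked, and not migrating */
		rq_unlock(p->rq);	/* raced with a migration; retry */
	}
}
```

With this shape, a caller like set_cpus_allowed_ptr() would simply do `rq = task_rq_lock_model(p);` and get back a locked runqueue for a task that is guaranteed not to be mid-migration, with no per-call-site goto loop.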

Also, you really need to show that the spin is bounded; otherwise your
two quoted paragraphs above are in contradiction. I think you can
actually make an argument that it is, so that's good.

