Date:	Fri, 28 Jan 2011 16:21:58 -0800
From:	Frank Rowand <frank.rowand@...sony.com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC:	Chris Mason <chris.mason@...cle.com>, Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	Mike Galbraith <efault@....de>,
	Oleg Nesterov <oleg@...hat.com>, Paul Turner <pjt@...gle.com>,
	Jens Axboe <axboe@...nel.dk>,
	Yong Zhang <yong.zhang0@...il.com>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [RFC][PATCH 11/18] sched: Add p->pi_lock to task_rq_lock()

On 01/04/11 06:59, Peter Zijlstra wrote:
> In order to be able to call set_task_cpu() while either holding
> p->pi_lock or task_rq(p)->lock we need to hold both locks in order to
> stabilize task_rq().
> 
> This makes task_rq_lock() acquire both locks, and have
> __task_rq_lock() validate that p->pi_lock is held. This increases the
> locking overhead for most scheduler syscalls but allows reduction of
> rq->lock contention for some scheduler hot paths (ttwu).
> 
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
> ---
>  kernel/sched.c |   81 ++++++++++++++++++++++++++-------------------------------
>  1 file changed, 37 insertions(+), 44 deletions(-)
> 
> Index: linux-2.6/kernel/sched.c
> ===================================================================
> 

> @@ -980,10 +972,13 @@ static void __task_rq_unlock(struct rq *
>  	raw_spin_unlock(&rq->lock);
>  }
>  
> -static inline void task_rq_unlock(struct rq *rq, unsigned long *flags)
> +static inline void
> +task_rq_unlock(struct rq *rq, struct task_struct *p, unsigned long *flags)
>  	__releases(rq->lock)
> +	__releases(p->pi_lock)
>  {
> -	raw_spin_unlock_irqrestore(&rq->lock, *flags);
> +	raw_spin_unlock(&rq->lock);
> +	raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
>  }
>  
>  /*

Most of the callers of task_rq_unlock() were also updated to pass the
newly added parameter "p", but a couple were missed.  That is fine by
the end of the patch series, because the missed callers are removed in
patches 12 and 13.  But if you want the series to be bisectable (which
I think it otherwise is), you might want to fix those last couple of
task_rq_unlock() callers in this patch.


> @@ -2646,9 +2647,9 @@ void sched_fork(struct task_struct *p, i
>          *
>          * Silence PROVE_RCU.
>          */
> -       rcu_read_lock();
> +       raw_spin_lock_irqsave(&p->pi_lock, flags);
>         set_task_cpu(p, cpu);
> -       rcu_read_unlock();
> +       raw_spin_unlock_irqrestore(&p->pi_lock, flags);

Does the "* Silence PROVE_RCU." comment still apply after
rcu_read_lock() and rcu_read_unlock() are removed?

-Frank

