Message-ID: <1296753410.26581.463.camel@laptop>
Date:	Thu, 03 Feb 2011 18:16:50 +0100
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	frank.rowand@...sony.com
Cc:	Chris Mason <chris.mason@...cle.com>, Ingo Molnar <mingo@...e.hu>,
	Thomas Gleixner <tglx@...utronix.de>,
	Mike Galbraith <efault@....de>,
	Oleg Nesterov <oleg@...hat.com>, Paul Turner <pjt@...gle.com>,
	Jens Axboe <axboe@...nel.dk>,
	Yong Zhang <yong.zhang0@...il.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [RFC][PATCH 11/18] sched: Add p->pi_lock to task_rq_lock()

On Fri, 2011-01-28 at 16:21 -0800, Frank Rowand wrote:
> On 01/04/11 06:59, Peter Zijlstra wrote:
> > In order to be able to call set_task_cpu() while either holding
> > p->pi_lock or task_rq(p)->lock we need to hold both locks in order to
> > stabilize task_rq().
> > 
> > This makes task_rq_lock() acquire both locks, and have
> > __task_rq_lock() validate that p->pi_lock is held. This increases the
> > locking overhead for most scheduler syscalls but allows reduction of
> > rq->lock contention for some scheduler hot paths (ttwu).
> > 
> > Signed-off-by: Peter Zijlstra <a.p.zijlstra@...llo.nl>
> > ---
> >  kernel/sched.c |   81 ++++++++++++++++++++++++++-------------------------------
> >  1 file changed, 37 insertions(+), 44 deletions(-)
> > 
> > Index: linux-2.6/kernel/sched.c
> > ===================================================================
> > 
> 
> > @@ -980,10 +972,13 @@ static void __task_rq_unlock(struct rq *
> >  	raw_spin_unlock(&rq->lock);
> >  }
> >  
> > -static inline void task_rq_unlock(struct rq *rq, unsigned long *flags)
> > +static inline void
> > +task_rq_unlock(struct rq *rq, struct task_struct *p, unsigned long *flags)
> >  	__releases(rq->lock)
> > +	__releases(p->pi_lock)
> >  {
> > -	raw_spin_unlock_irqrestore(&rq->lock, *flags);
> > +	raw_spin_unlock(&rq->lock);
> > +	raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
> >  }
> >  
> >  /*
> 
> Most of the callers of task_rq_unlock() were also fixed up to reflect
> the newly added parameter "*p", but a couple were missed.  By the end
> of the patch series that is ok because the couple that were missed
> get removed in patches 12 and 13.  But if you want the patch series
> to be bisectable (which I think it is otherwise), you might want to
> fix those last couple of callers of task_rq_unlock() in this patch.

Fixed those up indeed, thanks!

> 
> > @@ -2646,9 +2647,9 @@ void sched_fork(struct task_struct *p, i
> >          *
> >          * Silence PROVE_RCU.
> >          */
> > -       rcu_read_lock();
> > +       raw_spin_lock_irqsave(&p->pi_lock, flags);
> >         set_task_cpu(p, cpu);
> > -       rcu_read_unlock();
> > +       raw_spin_unlock_irqrestore(&p->pi_lock, flags);
> 
> Does "* Silence PROVE_RCU." no longer apply after removing rcu_read_lock() and
> rcu_read_unlock()?

I think the locking is still strictly superfluous; I can't seem to
recollect why I changed it from RCU to pi_lock, but since the task is
fresh and unhashed it really cannot be subject to concurrency.
