Message-ID: <20190131182557.GA3873@andrea>
Date:   Thu, 31 Jan 2019 19:25:57 +0100
From:   Andrea Parri <andrea.parri@...rulasolutions.com>
To:     linux-kernel@...r.kernel.org
Cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        "Paul E. McKenney" <paulmck@...ux.ibm.com>,
        Alan Stern <stern@...land.harvard.edu>,
        Will Deacon <will.deacon@....com>
Subject: Re: [PATCH v2] sched: Use READ_ONCE()/WRITE_ONCE() in
 move_queued_task()/task_rq_lock()

On Mon, Jan 21, 2019 at 04:52:40PM +0100, Andrea Parri wrote:
> move_queued_task() synchronizes with task_rq_lock() as follows:
> 
> 	move_queued_task()		task_rq_lock()
> 
> 	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
> 	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
> 	[S] ->cpu = new_cpu		[L] ->on_rq
> 
> where "[L] rq = task_rq()" is ordered before "ACQUIRE (rq->lock)" by an
> address dependency and, in turn, "ACQUIRE (rq->lock)" is ordered before
> "[L] ->on_rq" by the ACQUIRE itself.
> 
> Use READ_ONCE() to load ->cpu in task_rq() (cf. task_cpu()) to honor
> this address dependency.  Also, mark the accesses to ->cpu and ->on_rq
> with READ_ONCE()/WRITE_ONCE() to comply with the LKMM.
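
The ordering argument above is the message-passing (MP) pattern with
smp_wmb() on the writer side and an address dependency on the reader
side.  As a minimal sketch, it can be written as an LKMM litmus test
(modelled on tools/memory-model/litmus-tests/MP+onceassign+derefonce.litmus
from the kernel tree, with the pointer store standing in for ->cpu and
the dependent load standing in for ->on_rq; the variable names are
illustrative, not the scheduler's):

  C MP+wmb+addr-sketch

  {
  y=z;
  z=0;
  }

  P0(int *x, int **y)
  {
  	WRITE_ONCE(*x, 1);	/* [S] ->on_rq = MIGRATING */
  	smp_wmb();		/* WMB (__set_task_cpu()) */
  	WRITE_ONCE(*y, x);	/* [S] ->cpu = new_cpu */
  }

  P1(int *x, int **y)
  {
  	int *r0;
  	int r1;

  	r0 = READ_ONCE(*y);	/* [L] rq = task_rq() */
  	r1 = READ_ONCE(*r0);	/* [L] ->on_rq */
  }

  exists (1:r0=x /\ 1:r1=0)

The LKMM forbids the "exists" outcome: smp_wmb() orders the two stores
and the address dependency orders the two loads, so a reader that sees
the new pointer must also see the store before the barrier.  Dropping
the READ_ONCE() on the dependency-head load, or the WRITE_ONCE()s on
the stores, would leave the compiler free to break this.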
> 
> Signed-off-by: Andrea Parri <andrea.parri@...rulasolutions.com>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: "Paul E. McKenney" <paulmck@...ux.ibm.com>
> Cc: Alan Stern <stern@...land.harvard.edu>
> Cc: Will Deacon <will.deacon@....com>

ping

  Andrea


> ---
>  Changes in v2:
>  - mark accesses to ->on_rq as well
>  - update inline comment for task_rq_lock()
>  - minor editing in the subject/changelog
>  
>  include/linux/sched.h | 4 ++--
>  kernel/sched/core.c   | 9 +++++----
>  kernel/sched/sched.h  | 6 +++---
>  3 files changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index d2f90fa924683..41212d725a0eb 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1754,9 +1754,9 @@ static __always_inline bool need_resched(void)
>  static inline unsigned int task_cpu(const struct task_struct *p)
>  {
>  #ifdef CONFIG_THREAD_INFO_IN_TASK
> -	return p->cpu;
> +	return READ_ONCE(p->cpu);
>  #else
> -	return task_thread_info(p)->cpu;
> +	return READ_ONCE(task_thread_info(p)->cpu);
>  #endif
>  }
>  
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index a674c7db2f29d..d6e08faaa2843 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -107,11 +107,12 @@ struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
>  		 *					[L] ->on_rq
>  		 *	RELEASE (rq->lock)
>  		 *
> -		 * If we observe the old CPU in task_rq_lock, the acquire of
> +		 * If we observe the old CPU in task_rq_lock(), the acquire of
>  		 * the old rq->lock will fully serialize against the stores.
>  		 *
> -		 * If we observe the new CPU in task_rq_lock, the acquire will
> -		 * pair with the WMB to ensure we must then also see migrating.
> +		 * If we observe the new CPU in task_rq_lock(), the address
> +		 * dependency headed by '[L] rq = task_rq()' and the acquire
> +		 * will pair with the WMB to ensure we then also see migrating.
>  		 */
>  		if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
>  			rq_pin_lock(rq, rf);
> @@ -915,7 +916,7 @@ static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
>  {
>  	lockdep_assert_held(&rq->lock);
>  
> -	p->on_rq = TASK_ON_RQ_MIGRATING;
> +	WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
>  	dequeue_task(rq, p, DEQUEUE_NOCLOCK);
>  	set_task_cpu(p, new_cpu);
>  	rq_unlock(rq, rf);
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index d04530bf251fe..425a5589e5f60 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1460,9 +1460,9 @@ static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
>  	 */
>  	smp_wmb();
>  #ifdef CONFIG_THREAD_INFO_IN_TASK
> -	p->cpu = cpu;
> +	WRITE_ONCE(p->cpu, cpu);
>  #else
> -	task_thread_info(p)->cpu = cpu;
> +	WRITE_ONCE(task_thread_info(p)->cpu, cpu);
>  #endif
>  	p->wake_cpu = cpu;
>  #endif
> @@ -1563,7 +1563,7 @@ static inline int task_on_rq_queued(struct task_struct *p)
>  
>  static inline int task_on_rq_migrating(struct task_struct *p)
>  {
> -	return p->on_rq == TASK_ON_RQ_MIGRATING;
> +	return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
>  }
>  
>  /*
> -- 
> 2.17.1
> 
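For illustration only, the shape of the reader/writer code after this
patch can be mimicked in a stand-alone userspace sketch, using C11
relaxed atomics in place of READ_ONCE()/WRITE_ONCE(), a release fence
in place of smp_wmb(), and a pthread mutex in place of rq->lock.  All
names here (move_task(), lock_task_rq(), the struct task/rq fields) are
invented for the example, the old-rq locking on the writer side is
elided, and C11 relaxed loads do not formally preserve the address
dependency the changelog relies on; in this sketch it is the mutex
acquire (whose address cannot be computed before the ->cpu load) that
provides the ordering:

  #include <pthread.h>
  #include <stdatomic.h>

  #define NR_CPUS			2
  #define TASK_ON_RQ_MIGRATING	2

  struct rq {
  	pthread_mutex_t lock;
  };

  struct task {
  	_Atomic int on_rq;	/* stands in for p->on_rq */
  	_Atomic int cpu;	/* stands in for p->cpu   */
  };

  static struct rq runqueues[NR_CPUS] = {
  	{ PTHREAD_MUTEX_INITIALIZER },
  	{ PTHREAD_MUTEX_INITIALIZER },
  };

  /* Writer side, after the manner of move_queued_task(): */
  static void move_task(struct task *p, int new_cpu)
  {
  	/* WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING); */
  	atomic_store_explicit(&p->on_rq, TASK_ON_RQ_MIGRATING,
  			      memory_order_relaxed);
  	/* smp_wmb() in __set_task_cpu(): */
  	atomic_thread_fence(memory_order_release);
  	/* WRITE_ONCE(p->cpu, new_cpu); */
  	atomic_store_explicit(&p->cpu, new_cpu, memory_order_relaxed);
  }

  /* Reader side, after the manner of task_rq_lock(): */
  static struct rq *lock_task_rq(struct task *p)
  {
  	for (;;) {
  		/* [L] rq = task_rq(), i.e. READ_ONCE(p->cpu) */
  		int cpu = atomic_load_explicit(&p->cpu, memory_order_relaxed);
  		struct rq *rq = &runqueues[cpu];

  		/* ACQUIRE (rq->lock) */
  		pthread_mutex_lock(&rq->lock);

  		/* Recheck ->cpu and ->on_rq under the lock. */
  		if (cpu == atomic_load_explicit(&p->cpu, memory_order_relaxed) &&
  		    atomic_load_explicit(&p->on_rq, memory_order_relaxed) !=
  		    TASK_ON_RQ_MIGRATING)
  			return rq;

  		pthread_mutex_unlock(&rq->lock);
  	}
  }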
