[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:   Mon, 17 Feb 2020 13:38:51 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     paulmck@...nel.org
Cc:     rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
        kernel-team@...com, mingo@...nel.org, jiangshanlai@...il.com,
        dipankar@...ibm.com, akpm@...ux-foundation.org,
        mathieu.desnoyers@...icios.com, josh@...htriplett.org,
        tglx@...utronix.de, rostedt@...dmis.org, dhowells@...hat.com,
        edumazet@...gle.com, fweisbec@...il.com, oleg@...hat.com,
        joel@...lfernandes.org
Subject: Re: [PATCH tip/core/rcu 1/3] rcu-tasks: *_ONCE() for
 rcu_tasks_cbs_head

On Fri, Feb 14, 2020 at 04:25:18PM -0800, paulmck@...nel.org wrote:
> From: "Paul E. McKenney" <paulmck@...nel.org>
> 
> The RCU tasks list of callbacks, rcu_tasks_cbs_head, is sampled locklessly
> by rcu_tasks_kthread() when waiting for work to do.  This commit therefore
> applies READ_ONCE() to that lockless sampling and WRITE_ONCE() to the
> single potential store outside of rcu_tasks_kthread.
> 
> This data race was reported by KCSAN.  Not appropriate for backporting
> due to failure being unlikely.

What failure is possible here? AFAICT this is (again) one of those
load-compare-against-constant-and-discard patterns that are impossible to
mess up.

> Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
> ---
>  kernel/rcu/update.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index 6c4b862..a27df76 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -528,7 +528,7 @@ void call_rcu_tasks(struct rcu_head *rhp, rcu_callback_t func)
>  	rhp->func = func;
>  	raw_spin_lock_irqsave(&rcu_tasks_cbs_lock, flags);
>  	needwake = !rcu_tasks_cbs_head;
> -	*rcu_tasks_cbs_tail = rhp;
> +	WRITE_ONCE(*rcu_tasks_cbs_tail, rhp);
>  	rcu_tasks_cbs_tail = &rhp->next;
>  	raw_spin_unlock_irqrestore(&rcu_tasks_cbs_lock, flags);
>  	/* We can't create the thread unless interrupts are enabled. */
> @@ -658,7 +658,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
>  		/* If there were none, wait a bit and start over. */
>  		if (!list) {
>  			wait_event_interruptible(rcu_tasks_cbs_wq,
> -						 rcu_tasks_cbs_head);
> +						 READ_ONCE(rcu_tasks_cbs_head));
>  			if (!rcu_tasks_cbs_head) {
>  				WARN_ON(signal_pending(current));
>  				schedule_timeout_interruptible(HZ/10);
> -- 
> 2.9.5
> 
