Message-Id: <1623817648.jh19a97b5a.astroid@bobo.none>
Date:   Wed, 16 Jun 2021 14:28:54 +1000
From:   Nicholas Piggin <npiggin@...il.com>
To:     Andy Lutomirski <luto@...nel.org>, x86@...nel.org
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Dave Hansen <dave.hansen@...el.com>,
        LKML <linux-kernel@...r.kernel.org>, linux-mm@...ck.org,
        Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 5/8] membarrier, kthread: Use _ONCE accessors for task->mm

Excerpts from Andy Lutomirski's message of June 16, 2021 1:21 pm:
> membarrier reads cpu_rq(remote cpu)->curr->mm without locking.  Use
> READ_ONCE() and WRITE_ONCE() to remove the data races.
> 
> Cc: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
> Cc: Nicholas Piggin <npiggin@...il.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Signed-off-by: Andy Lutomirski <luto@...nel.org>
> ---
>  fs/exec.c                 | 2 +-
>  kernel/kthread.c          | 4 ++--
>  kernel/sched/membarrier.c | 6 +++---
>  3 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/exec.c b/fs/exec.c
> index 18594f11c31f..2e63dea83411 100644
> --- a/fs/exec.c
> +++ b/fs/exec.c
> @@ -1007,7 +1007,7 @@ static int exec_mmap(struct mm_struct *mm)
>  	local_irq_disable();
>  	active_mm = tsk->active_mm;
>  	tsk->active_mm = mm;
> -	tsk->mm = mm;
> +	WRITE_ONCE(tsk->mm, mm);  /* membarrier reads this without locks */
>  	/*
>  	 * This prevents preemption while active_mm is being loaded and
>  	 * it and mm are being updated, which could cause problems for
> diff --git a/kernel/kthread.c b/kernel/kthread.c
> index 8275b415acec..4962794e02d5 100644
> --- a/kernel/kthread.c
> +++ b/kernel/kthread.c
> @@ -1322,7 +1322,7 @@ void kthread_use_mm(struct mm_struct *mm)
>  		mmgrab(mm);
>  		tsk->active_mm = mm;
>  	}
> -	tsk->mm = mm;
> +	WRITE_ONCE(tsk->mm, mm);  /* membarrier reads this without locks */
>  	membarrier_update_current_mm(mm);
>  	switch_mm_irqs_off(active_mm, mm, tsk);
>  	membarrier_finish_switch_mm(atomic_read(&mm->membarrier_state));
> @@ -1363,7 +1363,7 @@ void kthread_unuse_mm(struct mm_struct *mm)
>  	smp_mb__after_spinlock();
>  	sync_mm_rss(mm);
>  	local_irq_disable();
> -	tsk->mm = NULL;
> +	WRITE_ONCE(tsk->mm, NULL);  /* membarrier reads this without locks */
>  	membarrier_update_current_mm(NULL);
>  	/* active_mm is still 'mm' */
>  	enter_lazy_tlb(mm, tsk);
> diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
> index 3173b063d358..c32c32a2441e 100644
> --- a/kernel/sched/membarrier.c
> +++ b/kernel/sched/membarrier.c
> @@ -410,7 +410,7 @@ static int membarrier_private_expedited(int flags, int cpu_id)
>  			goto out;
>  		rcu_read_lock();
>  		p = rcu_dereference(cpu_rq(cpu_id)->curr);
> -		if (!p || p->mm != mm) {
> +		if (!p || READ_ONCE(p->mm) != mm) {
>  			rcu_read_unlock();
>  			goto out;
>  		}
> @@ -423,7 +423,7 @@ static int membarrier_private_expedited(int flags, int cpu_id)
>  			struct task_struct *p;
>  
>  			p = rcu_dereference(cpu_rq(cpu)->curr);
> -			if (p && p->mm == mm)
> +			if (p && READ_ONCE(p->mm) == mm)

/* exec, kthread_use_mm write this without locks */ ?

Seems good to me.

Acked-by: Nicholas Piggin <npiggin@...il.com>
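
For anyone following along, the whole fix boils down to the pairing visible
in the hunks above: every lockless store to task->mm is wrapped in
WRITE_ONCE() and every racing load in READ_ONCE(), so the compiler cannot
tear, fuse, or re-issue either access. A rough userspace sketch of the same
pattern is below -- the volatile-cast macros are only an approximation of the
kernel's real accessors, and the task/mm types are stand-ins, not the real
structures:

	/*
	 * Minimal userspace approximation of the _ONCE pattern in the patch:
	 * the writer marks its lockless store with WRITE_ONCE() and the
	 * racing reader marks its load with READ_ONCE().  The macros and the
	 * task_struct / mm_struct types here are simplified stand-ins.
	 */
	#include <stdio.h>
	#include <pthread.h>

	#define READ_ONCE(x)	  (*(const volatile __typeof__(x) *)&(x))
	#define WRITE_ONCE(x, val) \
		do { *(volatile __typeof__(x) *)&(x) = (val); } while (0)

	struct mm_struct { int id; };
	struct task_struct { struct mm_struct *mm; };

	static struct task_struct task;
	static struct mm_struct the_mm = { .id = 1 };

	/* Writer side, analogous to exec_mmap() / kthread_use_mm(). */
	static void *writer(void *arg)
	{
		WRITE_ONCE(task.mm, &the_mm);	/* single, non-torn store */
		return NULL;
	}

	/* Reader side, analogous to membarrier's unlocked p->mm check. */
	static void *reader(void *arg)
	{
		struct mm_struct *mm = READ_ONCE(task.mm); /* single, non-torn load */

		printf("reader saw mm = %p\n", (void *)mm);
		return NULL;
	}

	int main(void)
	{
		pthread_t w, r;

		pthread_create(&w, NULL, writer, NULL);
		pthread_create(&r, NULL, reader, NULL);
		pthread_join(w, NULL);
		pthread_join(r, NULL);
		return 0;
	}

The _ONCE accessors don't add ordering by themselves; in the patch the
ordering still comes from the surrounding rq->curr RCU dereference and the
existing barriers, the annotations just make the racy accesses well defined.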

>  				__cpumask_set_cpu(cpu, tmpmask);
>  		}
>  		rcu_read_unlock();
> @@ -521,7 +521,7 @@ static int sync_runqueues_membarrier_state(struct mm_struct *mm)
>  		struct task_struct *p;
>  
>  		p = rcu_dereference(rq->curr);
> -		if (p && p->mm == mm)
> +		if (p && READ_ONCE(p->mm) == mm)
>  			__cpumask_set_cpu(cpu, tmpmask);
>  	}
>  	rcu_read_unlock();
> -- 
> 2.31.1
> 
> 
