Date: Fri, 17 May 2024 16:30:38 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Frederic Weisbecker <frederic@...nel.org>
Cc: LKML <linux-kernel@...r.kernel.org>, Boqun Feng <boqun.feng@...il.com>,
	Joel Fernandes <joel@...lfernandes.org>,
	Neeraj Upadhyay <neeraj.upadhyay@....com>,
	Uladzislau Rezki <urezki@...il.com>,
	Zqiang <qiang.zhang1211@...il.com>, rcu <rcu@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 1/2] rcu/tasks: Fix stale task snapshot from TASKS-TRACE

On Fri, May 17, 2024 at 05:23:02PM +0200, Frederic Weisbecker wrote:
> When RCU-TASKS-TRACE pre-GP takes a snapshot of the current task running
> on all online CPUs, no explicit ordering forces a context-switched task
> that the snapshot missed to see the pre-GP update-side accesses. The
> following diagram, courtesy of Paul, shows the possible bad scenario:
> 
>         CPU 0                                           CPU 1
>         -----                                           -----
> 
>         // Pre-GP update side access
>         WRITE_ONCE(*X, 1);
>         smp_mb();
>         r0 = rq->curr;
>                                                         RCU_INIT_POINTER(rq->curr, TASK_B)
>                                                         spin_unlock(rq)
>                                                         rcu_read_lock_trace()
>                                                         r1 = X;
>         /* ignore TASK_B */
> 
> Either r0==TASK_B or r1==1 is needed but neither is guaranteed.
> 
> One possible solution is to wait for an RCU grace period at the beginning
> of the TASKS-TRACE grace period before taking the current tasks snapshot.
> However, this would introduce more latency to TASKS-TRACE update sides.
> 
> Choose another solution: hold the target runqueue lock while taking the
> current task snapshot. This makes sure that the update side sees the
> latest context switch and that subsequent context switches will see the
> pre-GP update-side accesses.
> 
> Fixes: e386b6725798 ("rcu-tasks: Eliminate RCU Tasks Trace IPIs to online CPUs")
> Signed-off-by: Frederic Weisbecker <frederic@...nel.org>

Excellent catch!!!

Queued for review and testing with the usual wordsmithing shown below.
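
In case it helps future readers, the bad scenario can be written as a
hypothetical LKMM litmus test, with rq->curr reduced to a plain shared
variable "curr" and the update-side data to "x" (both names invented
for the model).  herd7 allows the "exists" clause, that is, the
snapshot can miss TASK_B while TASK_B also misses the pre-GP write:

C rcu-tasks-trace-stale-snapshot

{}

P0(int *x, int *curr)
{
	int r0;

	WRITE_ONCE(*x, 1);	/* Pre-GP update-side access. */
	smp_mb();
	r0 = READ_ONCE(*curr);	/* Snapshot of rq->curr. */
}

P1(int *x, int *curr, spinlock_t *rq)
{
	int r1;

	spin_lock(rq);
	WRITE_ONCE(*curr, 1);	/* RCU_INIT_POINTER(rq->curr, TASK_B). */
	spin_unlock(rq);
	r1 = READ_ONCE(*x);	/* TASK_B's read; may still see zero. */
}

exists (0:r0=0 /\ 1:r1=0)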

I am happy to push this via -rcu, but if you were instead looking to
send it via some other path:

Acked-by: Paul E. McKenney <paulmck@...nel.org>

							Thanx, Paul

------------------------------------------------------------------------

commit f04b876e13b8218867f4e4538488c20fbcafc4f0
Author: Frederic Weisbecker <frederic@...nel.org>
Date:   Fri May 17 17:23:02 2024 +0200

    rcu/tasks: Fix stale task snapshot for Tasks Trace
    
    When RCU-TASKS-TRACE pre-GP takes a snapshot of the current task running
    on all online CPUs, no explicit ordering synchronizes properly with a
    context switch.  This lack of ordering can permit the new task to miss
    pre-grace-period update-side accesses.  The following diagram, courtesy
    of Paul, shows the possible bad scenario:
    
            CPU 0                                           CPU 1
            -----                                           -----
    
            // Pre-GP update side access
            WRITE_ONCE(*X, 1);
            smp_mb();
            r0 = rq->curr;
                                                            RCU_INIT_POINTER(rq->curr, TASK_B)
                                                            spin_unlock(rq)
                                                            rcu_read_lock_trace()
                                                            r1 = X;
            /* ignore TASK_B */
    
    Either r0==TASK_B or r1==1 is needed but neither is guaranteed.
    
    One possible solution is to wait for an RCU grace period at the
    beginning of the RCU-tasks-trace grace period before taking the current
    tasks snapshot. However, this would introduce large additional latencies
    to RCU-tasks-trace grace periods.
    
    Another solution is to lock the target runqueue while taking the current
    task snapshot. This ensures that the update side sees the latest context
    switch and that subsequent context switches will see the pre-grace-period
    update-side accesses.
    
    This commit therefore adds runqueue locking to cpu_curr_snapshot().
    
    Fixes: e386b6725798 ("rcu-tasks: Eliminate RCU Tasks Trace IPIs to online CPUs")
    Signed-off-by: Frederic Weisbecker <frederic@...nel.org>
    Signed-off-by: Paul E. McKenney <paulmck@...nel.org>

diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 8adbd886ad2ee..58d8263c12392 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -1737,6 +1737,9 @@ static void rcu_tasks_trace_pregp_step(struct list_head *hop)
 	// allow safe access to the hop list.
 	for_each_online_cpu(cpu) {
 		rcu_read_lock();
+		// Note that cpu_curr_snapshot() locks the target CPU's
+		// runqueue.  This ensures that subsequent tasks running
+		// on that CPU will see the updater's pre-GP accesses.
 		t = cpu_curr_snapshot(cpu);
 		if (rcu_tasks_trace_pertask_prep(t, true))
 			trc_add_holdout(t, hop);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7019a40457a6d..fa6e60d5e3be3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4467,12 +4467,7 @@ int task_call_func(struct task_struct *p, task_call_f func, void *arg)
  * @cpu: The CPU on which to snapshot the task.
  *
  * Returns the task_struct pointer of the task "currently" running on
- * the specified CPU.  If the same task is running on that CPU throughout,
- * the return value will be a pointer to that task's task_struct structure.
- * If the CPU did any context switches even vaguely concurrently with the
- * execution of this function, the return value will be a pointer to the
- * task_struct structure of a randomly chosen task that was running on
- * that CPU somewhere around the time that this function was executing.
+ * the specified CPU.
  *
  * If the specified CPU was offline, the return value is whatever it
  * is, perhaps a pointer to the task_struct structure of that CPU's idle
@@ -4486,11 +4481,16 @@ int task_call_func(struct task_struct *p, task_call_f func, void *arg)
  */
 struct task_struct *cpu_curr_snapshot(int cpu)
 {
+	struct rq *rq = cpu_rq(cpu);
 	struct task_struct *t;
+	struct rq_flags rf;
 
-	smp_mb(); /* Pairing determined by caller's synchronization design. */
+	rq_lock_irqsave(rq, &rf);
+	smp_mb__after_spinlock(); /* Pairing determined by caller's synchronization design. */
 	t = rcu_dereference(cpu_curr(cpu));
+	rq_unlock_irqrestore(rq, &rf);
 	smp_mb(); /* Pairing determined by caller's synchronization design. */
+
 	return t;
 }
 

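For completeness, taking the runqueue lock on the snapshot side, with
the context switch's own lock acquisition and schedule()'s
smp_mb__after_spinlock() modeled on the other side, turns the above
hypothetical test into one whose bad outcome herd7 forbids.  (In this
simplified model the lock's release-acquire ordering already suffices;
the full barrier matters for the callers' other pairings.)

C rcu-tasks-trace-snapshot-fixed

{}

P0(int *x, int *curr, spinlock_t *rq)
{
	int r0;

	WRITE_ONCE(*x, 1);	/* Pre-GP update-side access. */
	spin_lock(rq);		/* As in patched cpu_curr_snapshot(). */
	smp_mb__after_spinlock();
	r0 = READ_ONCE(*curr);
	spin_unlock(rq);
}

P1(int *x, int *curr, spinlock_t *rq)
{
	int r1;

	spin_lock(rq);		/* Context switch runs under rq->lock. */
	smp_mb__after_spinlock();
	WRITE_ONCE(*curr, 1);	/* rq->curr = TASK_B. */
	spin_unlock(rq);
	r1 = READ_ONCE(*x);	/* TASK_B's subsequent read. */
}

exists (0:r0=0 /\ 1:r1=0)

Either the snapshot sees TASK_B (r0==1) or TASK_B sees the pre-GP
write (r1==1), which is exactly the guarantee the commit log asks for.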