Message-ID: <ZcQzyhcaRUSRo8a9@pavilion.home>
Date: Thu, 8 Feb 2024 02:52:10 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Boqun Feng <boqun.feng@...il.com>
Cc: linux-kernel@...r.kernel.org, rcu@...r.kernel.org,
linux-doc@...r.kernel.org, "Paul E. McKenney" <paulmck@...nel.org>,
Chen Zhongjin <chenzhongjin@...wei.com>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Valentin Schneider <vschneid@...hat.com>,
Neeraj Upadhyay <neeraj.iitr10@...il.com>,
Joel Fernandes <joel@...lfernandes.org>,
Josh Triplett <josh@...htriplett.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Zqiang <qiang.zhang1211@...il.com>,
Kent Overstreet <kent.overstreet@...ux.dev>,
Andrew Morton <akpm@...ux-foundation.org>,
Heiko Carstens <hca@...ux.ibm.com>, Arnd Bergmann <arnd@...db.de>,
Oleg Nesterov <oleg@...hat.com>,
Christian Brauner <brauner@...nel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Mike Christie <michael.christie@...cle.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Mateusz Guzik <mjguzik@...il.com>,
Nicholas Piggin <npiggin@...il.com>,
Peng Zhang <zhangpeng.00@...edance.com>
Subject: Re: [PATCH 2/2] rcu-tasks: Eliminate deadlocks involving do_exit()
and RCU tasks

On Wed, Feb 07, 2024 at 11:53:13PM +0100, Frederic Weisbecker wrote:
> On Mon, Jan 29, 2024 at 02:57:27PM -0800, Boqun Feng wrote:
> > From: "Paul E. McKenney" <paulmck@...nel.org>
> >
> > Holding a mutex across synchronize_rcu_tasks() and acquiring
> > that same mutex in code called from do_exit() after its call to
> > exit_tasks_rcu_start() but before its call to exit_tasks_rcu_stop()
> > results in deadlock. This is by design, because tasks that are far
> > enough into do_exit() are no longer present on the tasks list, making
> > it a bit difficult for RCU Tasks to find them, let alone wait on them
> > to do a voluntary context switch. However, such deadlocks are becoming
> > more frequent. In addition, lockdep currently does not detect such
> > deadlocks and they can be difficult to reproduce.
> >
> > In addition, if a task voluntarily context switches during that time
> > (for example, if it blocks acquiring a mutex), then this task is in an
> > RCU Tasks quiescent state. And with some adjustments, RCU Tasks could
> > just as well take advantage of that fact.
> >
> > This commit therefore eliminates these deadlocks by replacing the
> > SRCU-based wait for do_exit() completion with per-CPU lists of tasks
> > currently exiting. A given task will be on one of these per-CPU lists for
> > the same period of time that this task would previously have been in the
> > previous SRCU read-side critical section. These lists enable RCU Tasks
> > to find the tasks that have already been removed from the tasks list,
> > but that must nevertheless be waited upon.
> >
> > The RCU Tasks grace period gathers any of these do_exit() tasks that it
> > must wait on, and adds them to the list of holdouts. Per-CPU locking
> > and get_task_struct() are used to synchronize addition to and removal
> > from these lists.
> >
> > Link: https://lore.kernel.org/all/20240118021842.290665-1-chenzhongjin@huawei.com/
> >
> > Reported-by: Chen Zhongjin <chenzhongjin@...wei.com>
> > Signed-off-by: Paul E. McKenney <paulmck@...nel.org>
>
> With that, I think we can now revert 28319d6dc5e2 ("rcu-tasks: Fix
> synchronize_rcu_tasks() VS zap_pid_ns_processes()"), because if the task
> is on rcu_tasks_exit_list, it is treated just like the others and must go
> through check_holdout_task(). Therefore, and unlike with the previous SRCU
> approach, a task sleeping between exit_tasks_rcu_start() and
> exit_tasks_rcu_finish() is now in a quiescent state. And that kills the
> possible deadlock.
>
> > -void exit_tasks_rcu_start(void) __acquires(&tasks_rcu_exit_srcu)
> > +void exit_tasks_rcu_start(void)
> >  {
> > -	current->rcu_tasks_idx = __srcu_read_lock(&tasks_rcu_exit_srcu);
> > +	unsigned long flags;
> > +	struct rcu_tasks_percpu *rtpcp;
> > +	struct task_struct *t = current;
> > +
> > +	WARN_ON_ONCE(!list_empty(&t->rcu_tasks_exit_list));
> > +	get_task_struct(t);
>
> Is this get_task_struct() necessary?
>
> > +	preempt_disable();
> > +	rtpcp = this_cpu_ptr(rcu_tasks.rtpcpu);
> > +	t->rcu_tasks_exit_cpu = smp_processor_id();
> > +	raw_spin_lock_irqsave_rcu_node(rtpcp, flags);
>
> Do we really need smp_mb__after_unlock_lock() ?

Or maybe it orders the addition to rtpcp->rtp_exit_list VS the removal
from the main tasklist? Such that:

    synchronize_rcu_tasks()                 do_exit()
    ----------------------                  ---------
    //for_each_process_thread()
    READ tasklist                           WRITE rtpcp->rtp_exit_list
    LOCK rtpcp->lock                        UNLOCK rtpcp->lock
    smp_mb__after_unlock_lock()             WRITE tasklist //unhash_process()
    READ rtpcp->rtp_exit_list
Does this work? Hmm, I'll play with litmus once I have a fresh brain...
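
A rough litmus sketch of the above for herd7/LKMM, in case it helps
(untested, and the names are made up: P0 stands for the do_exit() side,
P1 for the grace-period scan, tasklist=1 means the task is still hashed,
and the exists clause is the bad case where the scan misses the task on
both lists):

C rcu-tasks-exit-list

(*
 * P0: do_exit() adds the task to rtp_exit_list under rtpcp->lock, then
 *     unhash_process() removes it from the tasklist.
 * P1: the grace period scans the tasklist, then takes rtpcp->lock with
 *     smp_mb__after_unlock_lock() and scans rtp_exit_list.
 *)

{
	spinlock_t lock;
	tasklist=1;
}

P0(int *tasklist, int *rtp_exit_list, spinlock_t *lock)
{
	spin_lock(lock);
	WRITE_ONCE(*rtp_exit_list, 1);
	spin_unlock(lock);
	WRITE_ONCE(*tasklist, 0);
}

P1(int *tasklist, int *rtp_exit_list, spinlock_t *lock)
{
	int r0;
	int r1;

	r0 = READ_ONCE(*tasklist);
	spin_lock(lock);
	smp_mb__after_unlock_lock();
	r1 = READ_ONCE(*rtp_exit_list);
	spin_unlock(lock);
}

exists (1:r0=0 /\ 1:r1=0)
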
Thanks.