Message-Id: <20170922184610.GT3521@linux.vnet.ibm.com>
Date: Fri, 22 Sep 2017 11:46:10 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Josh Triplett <josh@...htriplett.org>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] srcu: queue work without holding the lock
On Fri, Sep 22, 2017 at 05:28:05PM +0200, Sebastian Andrzej Siewior wrote:
> On RT we can't invoke queue_delayed_work() within an atomic section
> (such as the one entered by raw_spin_lock_irqsave()).
> srcu_reschedule() already invokes queue_delayed_work() outside of the
> raw_spin_lock_irq_rcu_node() section, so doing the same should be fine
> here, too. If the remaining callers of call_srcu() aren't atomic
> (spin_lock_irqsave() is fine), then this should work on RT, too.
Just to make sure I understand... The problem is not the _irqsave,
but rather the raw_?
Thanx, Paul
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> ---
>  kernel/rcu/srcutree.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
> index d190af0e56f8..3ee4ef40f23e 100644
> --- a/kernel/rcu/srcutree.c
> +++ b/kernel/rcu/srcutree.c
> @@ -648,12 +648,17 @@ static void srcu_funnel_gp_start(struct srcu_struct *sp, struct srcu_data *sdp,
>  	/* If grace period not already done and none in progress, start it. */
>  	if (!rcu_seq_done(&sp->srcu_gp_seq, s) &&
>  	    rcu_seq_state(sp->srcu_gp_seq) == SRCU_STATE_IDLE) {
> +		unsigned long delay;
> +
>  		WARN_ON_ONCE(ULONG_CMP_GE(sp->srcu_gp_seq, sp->srcu_gp_seq_needed));
>  		srcu_gp_start(sp);
> +		delay = srcu_get_delay(sp);
> +		raw_spin_unlock_irqrestore_rcu_node(sp, flags);
> +
>  		queue_delayed_work(system_power_efficient_wq, &sp->work,
> -				   srcu_get_delay(sp));
> -	}
> -	raw_spin_unlock_irqrestore_rcu_node(sp, flags);
> +				   delay);
> +	} else
> +		raw_spin_unlock_irqrestore_rcu_node(sp, flags);
>  }
>
> /*
> --
> 2.14.1
>