Message-Id: <20180521232537.GJ3803@linux.vnet.ibm.com>
Date: Mon, 21 May 2018 16:25:38 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Joel Fernandes <joelaf@...gle.com>
Cc: linux-kernel@...r.kernel.org,
Joel Fernandes <joel@...lfernandes.org>,
Josh Triplett <josh@...htriplett.org>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>, byungchul.park@....com,
kernel-team@...roid.com
Subject: Re: [PATCH v3 4/4] rcu: Unlock non-start node only after accessing
its gp_seq_needed
On Sun, May 20, 2018 at 09:42:20PM -0700, Joel Fernandes wrote:
> We acquire gp_seq_needed locklessly. To be safe, let's do the unlocking
> after the access.
Actually, no, we hold rnp_start's ->lock throughout. And this CPU (or in
the case of no-CBs CPUs, this task) is in charge of rdp->gp_seq_needed,
so nothing else is accessing it. Or at least that is the intent. ;-)
One exception is CPU hotplug, but in that case, only the CPU doing the
hotplugging is allowed to touch rdp->gp_seq_needed and even then only
while the incoming/outgoing CPU is inactive.
Thanx, Paul
> Signed-off-by: Joel Fernandes <joel@...lfernandes.org>
> ---
> kernel/rcu/tree.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 879c67a31116..efbd21b2a1a6 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1603,13 +1603,13 @@ static bool rcu_start_this_gp(struct rcu_node *rnp_start, struct rcu_data *rdp,
> trace_rcu_grace_period(rsp->name, READ_ONCE(rsp->gp_seq), TPS("newreq"));
> ret = true; /* Caller must wake GP kthread. */
> unlock_out:
> - if (rnp != rnp_start)
> - raw_spin_unlock_rcu_node(rnp);
> /* Push furthest requested GP to leaf node and rcu_data structure. */
> if (ULONG_CMP_GE(rnp->gp_seq_needed, gp_seq_req)) {
> rnp_start->gp_seq_needed = gp_seq_req;
> rdp->gp_seq_needed = gp_seq_req;
> }
> + if (rnp != rnp_start)
> + raw_spin_unlock_rcu_node(rnp);
> return ret;
> }
>
> --
> 2.17.0.441.gb46fe60e1d-goog
>