Message-ID: <20120615204500.GH31184@leaf>
Date: Fri, 15 Jun 2012 13:45:00 -0700
From: Josh Triplett <josh@...htriplett.org>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: linux-kernel@...r.kernel.org, mingo@...e.hu, laijs@...fujitsu.com,
dipankar@...ibm.com, akpm@...ux-foundation.org,
mathieu.desnoyers@...ymtl.ca, niv@...ibm.com, tglx@...utronix.de,
peterz@...radead.org, rostedt@...dmis.org, Valdis.Kletnieks@...edu,
dhowells@...hat.com, eric.dumazet@...il.com, darren@...art.com,
fweisbec@...il.com, patches@...aro.org,
"Paul E. McKenney" <paul.mckenney@...aro.org>
Subject: Re: [PATCH tip/core/rcu 03/14] rcu: Add ACCESS_ONCE() to ->qlen accesses

On Fri, Jun 15, 2012 at 01:13:04PM -0700, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paul.mckenney@...aro.org>
>
> The _rcu_barrier() function accesses other CPUs' rcu_data structures'
> ->qlen fields without benefit of locking. This commit therefore adds
> the required ACCESS_ONCE() wrappers around the accesses and updates
> that need them.
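
For reference, ACCESS_ONCE() is defined in include/linux/compiler.h as a
volatile cast, which keeps the compiler from refetching, merging, or
otherwise optimizing away the access:

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))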
This type of lockless-access restriction makes me wonder whether we could
add some kind of attribute to fields like ->qlen so that GCC or sparse
could help enforce it.
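
One purely hypothetical way to do it, sketched below: give ->qlen a
sparse __bitwise type, the way the endianness annotations (__le32 and
friends) already work, so that any plain access draws a sparse warning
and everything gets funneled through __force-casting accessors.

	/* Hypothetical sketch; rcu_qlen_t and qlen_read() do not exist. */
	typedef long __bitwise rcu_qlen_t;

	static inline long qlen_read(rcu_qlen_t *qp)
	{
		/* __force silences sparse for this sanctioned access. */
		return (__force long)ACCESS_ONCE(*qp);
	}

The obvious downside is that the owning CPU's plain updates would then
need wrappers too, since sparse would warn on those as well.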
> ACCESS_ONCE() is not needed when a CPU accesses its own ->qlen, or
> in code that cannot run while _rcu_barrier() is sampling ->qlen fields.
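
For context, the lockless reader in question is _rcu_barrier()'s per-CPU
scan, which (from memory, so a sketch rather than the exact code) does
roughly:

	for_each_online_cpu(cpu) {
		rdp = per_cpu_ptr(rsp->rda, cpu);
		if (ACCESS_ONCE(rdp->qlen))
			/* CPU has callbacks queued; wait for them. */
			smp_call_function_single(cpu, rcu_barrier_func,
						 rsp, 1);
	}

Any path that can run concurrently with that scan and touch ->qlen needs
the wrapper; the rest can keep plain accesses.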
>
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@...htriplett.org>
> kernel/rcutree.c | 8 ++++----
> 1 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> index d938671..cdc101e 100644
> --- a/kernel/rcutree.c
> +++ b/kernel/rcutree.c
> @@ -1349,7 +1349,7 @@ rcu_send_cbs_to_orphanage(int cpu, struct rcu_state *rsp,
> rsp->qlen += rdp->qlen;
> rdp->n_cbs_orphaned += rdp->qlen;
> rdp->qlen_lazy = 0;
> - rdp->qlen = 0;
> + ACCESS_ONCE(rdp->qlen) = 0;
> }
>
> /*
> @@ -1597,7 +1597,7 @@ static void rcu_do_batch(struct rcu_state *rsp, struct rcu_data *rdp)
> }
> smp_mb(); /* List handling before counting for rcu_barrier(). */
> rdp->qlen_lazy -= count_lazy;
> - rdp->qlen -= count;
> + ACCESS_ONCE(rdp->qlen) -= count;
> rdp->n_cbs_invoked += count;
>
> /* Reinstate batch limit if we have worked down the excess. */
> @@ -1886,7 +1886,7 @@ __call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu),
> rdp = this_cpu_ptr(rsp->rda);
>
> /* Add the callback to our list. */
> - rdp->qlen++;
> + ACCESS_ONCE(rdp->qlen)++;
> if (lazy)
> rdp->qlen_lazy++;
> else
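
One note on the ++ and -= forms above: given the volatile-cast definition
of ACCESS_ONCE(), something like ACCESS_ONCE(rdp->qlen) -= count expands
to one volatile load plus one volatile store, roughly:

	/* What ACCESS_ONCE(rdp->qlen) -= count boils down to: */
	*(volatile long *)&rdp->qlen =
		*(volatile long *)&rdp->qlen - count;

Each access happens exactly once, but the read-modify-write as a whole is
not atomic. That's fine here, since only the owning CPU ever modifies its
own ->qlen; _rcu_barrier() just needs to see a sane, non-torn value.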
> @@ -2420,7 +2420,7 @@ rcu_boot_init_percpu_data(int cpu, struct rcu_state *rsp)
> rdp->grpmask = 1UL << (cpu - rdp->mynode->grplo);
> init_callback_list(rdp);
> rdp->qlen_lazy = 0;
> - rdp->qlen = 0;
> + ACCESS_ONCE(rdp->qlen) = 0;
> rdp->dynticks = &per_cpu(rcu_dynticks, cpu);
> WARN_ON_ONCE(rdp->dynticks->dynticks_nesting != DYNTICK_TASK_EXIT_IDLE);
> WARN_ON_ONCE(atomic_read(&rdp->dynticks->dynticks) != 1);
> --
> 1.7.8
>