Message-ID: <alpine.DEB.2.21.1911081813330.1687@www.lameter.com>
Date: Fri, 8 Nov 2019 18:17:47 +0000 (UTC)
From: Christopher Lameter <cl@...ux.com>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
cc: Dennis Zhou <dennis@...nel.org>, linux-kernel@...r.kernel.org,
Tejun Heo <tj@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
"Paul E. McKenney" <paulmck@...nel.org>
Subject: Re: [PATCH v2] percpu-refcount: Use normal instead of RCU-sched
On Fri, 8 Nov 2019, Sebastian Andrzej Siewior wrote:
> diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
> index 7aef0abc194a2..390031e816dcd 100644
> --- a/include/linux/percpu-refcount.h
> +++ b/include/linux/percpu-refcount.h
> @@ -186,14 +186,14 @@ static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr)
> {
> unsigned long __percpu *percpu_count;
>
> - rcu_read_lock_sched();
> + rcu_read_lock();
>
> if (__ref_is_percpu(ref, &percpu_count))
> this_cpu_add(*percpu_count, nr);
You can use __this_cpu_add() instead, since rcu_read_lock() implies
preempt disable.
This will not change the code generated for x86, but other platforms
that do not use an atomic operation here will be able to avoid the code
that disables preemption around the per-cpu operation.

The same applies to all the other per-cpu operations in the patch.
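
For illustration, the full helper with that substitution would look
roughly like this (an untested sketch based on the v5.4-era body of
percpu_ref_get_many(); the else branch is reproduced from that version,
not from the quoted hunk):

static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr)
{
	unsigned long __percpu *percpu_count;

	rcu_read_lock();

	if (__ref_is_percpu(ref, &percpu_count))
		/*
		 * Raw per-cpu add: per the argument above, the RCU
		 * read-side critical section already prevents
		 * preemption, so the preempt-safe this_cpu_add() is
		 * not needed here.
		 */
		__this_cpu_add(*percpu_count, nr);
	else
		atomic_long_add(nr, &ref->count);

	rcu_read_unlock();
}

On architectures without a single-instruction per-cpu add, the generic
this_cpu_add() fallback brackets the update with irq/preempt
protection; __this_cpu_add() performs the raw update (plus a debug-time
preemption check), which is the saving described above.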