Message-ID: <YXFytAdeF5RPRERf@fedora>
Date: Thu, 21 Oct 2021 10:01:24 -0400
From: Dennis Zhou <dennis@...nel.org>
To: Pavel Begunkov <asml.silence@...il.com>
Cc: linux-block@...r.kernel.org, Jens Axboe <axboe@...nel.dk>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Christoph Lameter <cl@...ux.com>, Tejun Heo <tj@...nel.org>
Subject: Re: [PATCH v2 1/2] percpu_ref: percpu_ref_tryget_live() version
holding RCU
Hello,
On Thu, Oct 21, 2021 at 02:30:51PM +0100, Pavel Begunkov wrote:
> Add percpu_ref_tryget_live_rcu(), a version of
> percpu_ref_tryget_live() where the caller is responsible for enclosing
> it in an RCU read lock section.
>
> Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
> ---
> include/linux/percpu-refcount.h | 33 +++++++++++++++++++++++----------
> 1 file changed, 23 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
> index ae16a9856305..b31d3f3312ce 100644
> --- a/include/linux/percpu-refcount.h
> +++ b/include/linux/percpu-refcount.h
> @@ -266,6 +266,28 @@ static inline bool percpu_ref_tryget(struct percpu_ref *ref)
> return percpu_ref_tryget_many(ref, 1);
> }
>
> +/**
> + * percpu_ref_tryget_live_rcu - same as percpu_ref_tryget_live() but the
> + * caller is responsible for taking RCU.
> + *
> + * This function is safe to call as long as @ref is between init and exit.
> + */
> +static inline bool percpu_ref_tryget_live_rcu(struct percpu_ref *ref)
> +{
> + unsigned long __percpu *percpu_count;
> + bool ret = false;
> +
> + WARN_ON_ONCE(!rcu_read_lock_held());
> +
> + if (likely(__ref_is_percpu(ref, &percpu_count))) {
> + this_cpu_inc(*percpu_count);
> + ret = true;
> + } else if (!(ref->percpu_count_ptr & __PERCPU_REF_DEAD)) {
> + ret = atomic_long_inc_not_zero(&ref->data->count);
> + }
> + return ret;
> +}
> +
> /**
> * percpu_ref_tryget_live - try to increment a live percpu refcount
> * @ref: percpu_ref to try-get
Nit: it's dumb convention at this point, but do you mind copying this
guy up into the new kernel-doc as well? I like consistency.
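
Concretely, just a sketch of what I mean, carrying the @ref line into the
new comment:

	/**
	 * percpu_ref_tryget_live_rcu - same as percpu_ref_tryget_live() but the
	 * caller is responsible for taking RCU.
	 * @ref: percpu_ref to try-get
	 *
	 * This function is safe to call as long as @ref is between init and exit.
	 */
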
> @@ -283,20 +305,11 @@ static inline bool percpu_ref_tryget(struct percpu_ref *ref)
> */
> static inline bool percpu_ref_tryget_live(struct percpu_ref *ref)
> {
> - unsigned long __percpu *percpu_count;
> bool ret = false;
>
> rcu_read_lock();
> -
> - if (__ref_is_percpu(ref, &percpu_count)) {
> - this_cpu_inc(*percpu_count);
> - ret = true;
> - } else if (!(ref->percpu_count_ptr & __PERCPU_REF_DEAD)) {
> - ret = atomic_long_inc_not_zero(&ref->data->count);
> - }
> -
> + ret = percpu_ref_tryget_live_rcu(ref);
> rcu_read_unlock();
> -
> return ret;
> }
>
> --
> 2.33.1
>
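
FWIW, a rough sketch of the kind of caller this is aimed at: one that is
already inside an RCU read-side section for its lookup and doesn't want
percpu_ref_tryget_live() to take a second, nested rcu_read_lock()/
rcu_read_unlock(). my_obj, my_obj_get_live() and the xarray lookup below
are made up for illustration, not from this series:

struct my_obj {
	struct percpu_ref ref;
	/* ... */
};

static struct my_obj *my_obj_get_live(struct xarray *objs, unsigned long id)
{
	struct my_obj *obj;

	rcu_read_lock();
	/* the pointer from xa_load() is only stable under the RCU read lock */
	obj = xa_load(objs, id);
	if (obj && !percpu_ref_tryget_live_rcu(&obj->ref))
		obj = NULL;	/* ref is dying or dead, don't hand it out */
	rcu_read_unlock();

	/* on success the caller eventually drops the ref with percpu_ref_put() */
	return obj;
}
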
Currently I'm not carrying anything and I don't expect any percpu_ref
work to come in. Jens, feel free to pick this up.
Acked-by: Dennis Zhou <dennis@...nel.org>
Thanks,
Dennis