Message-ID: <20180919075506.GA23172@ming.t460p>
Date: Wed, 19 Sep 2018 15:55:07 +0800
From: Ming Lei <ming.lei@...hat.com>
To: "jianchao.wang" <jianchao.w.wang@...cle.com>
Cc: linux-kernel@...r.kernel.org, Tejun Heo <tj@...nel.org>,
Kent Overstreet <kent.overstreet@...il.com>,
linux-block@...r.kernel.org
Subject: Re: [PATCH 2/4] lib/percpu-refcount: introduce percpu_ref_resurge()
On Wed, Sep 19, 2018 at 01:19:10PM +0800, jianchao.wang wrote:
> Hi Ming
>
> On 09/18/2018 06:19 PM, Ming Lei wrote:
> > + unsigned long __percpu *percpu_count;
> > +
> > + WARN_ON_ONCE(__ref_is_percpu(ref, &percpu_count));
> > +
> > +	/* take one extra ref to avoid racing with .release */
> > + rcu_read_lock_sched();
> > + atomic_long_add(1, &ref->count);
> > + rcu_read_unlock_sched();
> > + }
>
> The rcu_read_lock_sched() here is redundant: we are already inside the
> critical section of a spin_lock_irqsave().
Right.
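spin_lock_irqsave() disables interrupts, which already implies an
RCU-sched read-side critical section, so the extra grab can be done
directly. A minimal untested sketch of how the hunk might look with
the redundant RCU calls dropped:

	unsigned long __percpu *percpu_count;

	WARN_ON_ONCE(__ref_is_percpu(ref, &percpu_count));

	/*
	 * Take one extra ref to avoid racing with .release; the caller
	 * already holds a spin_lock_irqsave() section, which disables
	 * interrupts and thus acts as an RCU-sched read-side critical
	 * section.
	 */
	atomic_long_add(1, &ref->count);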
>
> The atomic_long_add(1, &ref->count) can have one of two results:
> 1. ref->count > 1
>    the count will not drop to zero any more.
> 2. ref->count == 1
>    the count has already dropped to zero and .release may be running.
IMO, both cases are fine and supported, or do you have any other
concern about this approach?
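
For illustration only, the two outcomes could be distinguished by looking
at the post-increment value; percpu_ref_get_saw_zero() below is a
hypothetical helper, not part of the patch:

	/*
	 * Hypothetical helper (illustration only): take the extra ref
	 * and report whether the count had already dropped to zero,
	 * i.e. whether .release may be running concurrently.
	 */
	static bool percpu_ref_get_saw_zero(struct percpu_ref *ref)
	{
		/* atomic_long_add_return() returns the value after the add */
		return atomic_long_add_return(1, &ref->count) == 1;
	}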
thanks,
Ming