Message-ID: <242969d0-1370-b342-025d-a11b7f59d28f@oracle.com>
Date: Wed, 19 Sep 2018 13:19:10 +0800
From: "jianchao.wang" <jianchao.w.wang@...cle.com>
To: Ming Lei <ming.lei@...hat.com>, linux-kernel@...r.kernel.org
Cc: Tejun Heo <tj@...nel.org>,
Kent Overstreet <kent.overstreet@...il.com>,
linux-block@...r.kernel.org
Subject: Re: [PATCH 2/4] lib/percpu-refcount: introduce percpu_ref_resurge()
Hi Ming
On 09/18/2018 06:19 PM, Ming Lei wrote:
> + unsigned long __percpu *percpu_count;
> +
> + WARN_ON_ONCE(__ref_is_percpu(ref, &percpu_count));
> +
> + /* get one extra ref for avoiding race with .release */
> + rcu_read_lock_sched();
> + atomic_long_add(1, &ref->count);
> + rcu_read_unlock_sched();
> + }
The rcu_read_lock_sched() here is redundant: at this point we are already
inside a spin_lock_irqsave() critical section, which disables preemption
anyway (see the sketch below).
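As a rough illustration (the lock name below is just a placeholder, not
the actual caller's lock), holding the spinlock with irqs off is already
an RCU-sched read-side section:

	spin_lock_irqsave(&some_lock, flags);	/* irqs + preemption off */
	/* already inside an RCU-sched read-side section, no markers needed */
	atomic_long_add(1, &ref->count);
	spin_unlock_irqrestore(&some_lock, flags);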
The atomic_long_add(1, &ref->count) has two possible outcomes.
1. ref->count > 1
   the count cannot drop to zero any more.
2. ref->count == 1
   the count had already dropped to zero, so .release may be running
   concurrently (see the sketch below).
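To make the distinction between the two cases explicit, a minimal
(hypothetical, not from the patch) variant using atomic_long_add_return()
could look like:

	if (atomic_long_add_return(1, &ref->count) == 1) {
		/*
		 * The count was already zero before the add, i.e. the
		 * final put has happened and ->release() may be running
		 * (or may have already run); the "extra" reference taken
		 * here comes too late to prevent that.
		 */
	}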
Thanks
Jianchao