Date:   Tue, 11 Sep 2018 08:00:50 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     Tejun Heo <tj@...nel.org>
Cc:     linux-kernel@...r.kernel.org,
        Jianchao Wang <jianchao.w.wang@...cle.com>,
        Kent Overstreet <kent.overstreet@...il.com>,
        linux-block@...r.kernel.org
Subject: Re: [PATCH] percpu-refcount: relax limit on percpu_ref_reinit()

Hi Tejun,

On Mon, Sep 10, 2018 at 09:49:20AM -0700, Tejun Heo wrote:
> Hello, Ming.
> 
> On Sun, Sep 09, 2018 at 08:58:24PM +0800, Ming Lei wrote:
> > @@ -196,15 +197,6 @@ static void __percpu_ref_switch_to_percpu(struct percpu_ref *ref)
> >  
> >  	atomic_long_add(PERCPU_COUNT_BIAS, &ref->count);
> >  
> > -	/*
> > -	 * Restore per-cpu operation.  smp_store_release() is paired
> > -	 * with READ_ONCE() in __ref_is_percpu() and guarantees that the
> > -	 * zeroing is visible to all percpu accesses which can see the
> > -	 * following __PERCPU_REF_ATOMIC clearing.
> > -	 */
> 
> So, while the location of percpu counter resetting moved, the pairing
> of store_release and READ_ONCE is still required to ensure that the
> clearing is visible before the switching to percpu mode becomes
> effective.  Can you please rephrase and keep the above comment?

OK, will do it in V2.
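
For example, something along these lines in V2 (just a sketch, exact
wording still to be polished; the comment stays with the mode switch
and is rephrased, since the zeroing itself has moved out of this
function):

	/*
	 * Restore per-cpu operation.  smp_store_release() is paired
	 * with READ_ONCE() in __ref_is_percpu() and guarantees that
	 * the zeroing done before re-entering percpu mode is visible
	 * to all percpu accesses which can see the following
	 * __PERCPU_REF_ATOMIC clearing.
	 */
	smp_store_release(&ref->percpu_count_ptr,
			  ref->percpu_count_ptr & ~__PERCPU_REF_ATOMIC);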

> 
> > -	for_each_possible_cpu(cpu)
> > -		*per_cpu_ptr(percpu_count, cpu) = 0;
> > -
> >  	smp_store_release(&ref->percpu_count_ptr,
> >  			  ref->percpu_count_ptr & ~__PERCPU_REF_ATOMIC);
> >  }
> ...
> > @@ -357,10 +349,11 @@ EXPORT_SYMBOL_GPL(percpu_ref_kill_and_confirm);
> >  void percpu_ref_reinit(struct percpu_ref *ref)
> >  {
> >  	unsigned long flags;
> > +	unsigned long __percpu *percpu_count;
> >  
> >  	spin_lock_irqsave(&percpu_ref_switch_lock, flags);
> >  
> > -	WARN_ON_ONCE(!percpu_ref_is_zero(ref));
> > +	WARN_ON_ONCE(__ref_is_percpu(ref, &percpu_count));
> 
> Can you elaborate this part?  This doesn't seem required for the
> described change.  Why is it part of the patch?

The motivation for this patch is to avoid the above warning and to
allow the ref to be switched back to percpu mode without its count
having dropped to zero.

That is why the check has to be relaxed in the above way: instead of
requiring the count to be zero, percpu_ref_reinit() now only warns if
the ref is still in percpu mode, i.e. if it was never killed or
switched to atomic mode in the first place.
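
To illustrate, the pattern this change is meant to permit looks
roughly like the following (a sketch with an illustrative ref; think
of blk-mq's q_usage_counter):

	percpu_ref_kill(&ref);		/* switch to atomic mode; live
					 * references may still exist */
	...
	percpu_ref_reinit(&ref);	/* switch back to percpu mode;
					 * before this patch, this warned
					 * unless the count was zero */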


Thanks,
Ming
