Message-ID: <20131031135714.GE8976@redhat.com>
Date:	Thu, 31 Oct 2013 15:57:14 +0200
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc:	linux-kernel <linux-kernel@...r.kernel.org>, kvm@...r.kernel.org,
	gleb@...hat.com, pbonzini@...hat.com
Subject: Re: [PATCH RFC] kvm: optimize out smp_mb using srcu_read_unlock

On Wed, Oct 30, 2013 at 09:56:29PM -0700, Paul E. McKenney wrote:
> On Thu, Oct 31, 2013 at 01:26:05AM +0200, Michael S. Tsirkin wrote:
> > > > Paul, could you review this patch please?
> > > > Documentation/memory-barriers.txt says that unlock has a weaker
> > > > uni-directional barrier, but in practice srcu_read_unlock calls
> > > > smp_mb().
> > > > 
> > > > Is it OK to rely on this? If not, can I add
> > > > smp_mb__after_srcu_read_unlock (making it an empty macro for now)
> > > > so we can avoid an actual extra smp_mb()?
> > > 
> > > Please use smp_mb__after_srcu_read_unlock().  After all, it was not
> > > that long ago that srcu_read_unlock() contained no memory barriers,
> > > and perhaps some day it won't need to once again.
> > > 
> > > 							Thanx, Paul
> > >
> > 
> > Thanks!
> > Something like this will be enough?
> > 
> > diff --git a/include/linux/srcu.h b/include/linux/srcu.h
> > index c114614..9b058ee 100644
> > --- a/include/linux/srcu.h
> > +++ b/include/linux/srcu.h
> > @@ -237,4 +237,18 @@ static inline void srcu_read_unlock(struct srcu_struct *sp, int idx)
> >  	__srcu_read_unlock(sp, idx);
> >  }
> > 
> > +/**
> > + * smp_mb__after_srcu_read_unlock - ensure full ordering after srcu_read_unlock
> > + *
> > + * Converts the preceding srcu_read_unlock into a two-way memory barrier.
> > + *
> > + * Call this after srcu_read_unlock, to guarantee that all memory operations
> > + * that occur after smp_mb__after_srcu_read_unlock will appear to happen after
> > + * the preceding srcu_read_unlock.
> > + */
> > +static inline void smp_mb__after_srcu_read_unlock(void)
> > +{
> > +	/* __srcu_read_unlock has smp_mb() internally so nothing to do here. */
> > +}
> > +
> >  #endif
> 
> Yep, that should do it!
> 
> 							Thanx, Paul

BTW I'm wondering about the smp_mb() within srcu_read_lock().
If we kept the index in the same memory as the buffer we
dereference, could we get rid of it and use a dependency barrier
instead? The barrier does appear prominently in profiles.
Thoughts?


-- 
MST
