Message-ID: <20160516172222.GE3528@linux.vnet.ibm.com>
Date:	Mon, 16 May 2016 10:22:22 -0700
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Peter Hurley <peter@...leysoftware.com>
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Waiman Long <Waiman.Long@....com>,
	Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
	Davidlohr Bueso <dave@...olabs.net>,
	Jason Low <jason.low2@...com>,
	Dave Chinner <david@...morbit.com>,
	Scott J Norton <scott.norton@....com>,
	Douglas Hatch <doug.hatch@....com>, kcc@...gle.com,
	dvyukov@...gle.com, dhowells@...hat.com
Subject: Re: [PATCH v2] locking/rwsem: Add reader-owned state to the owner
 field

On Mon, May 16, 2016 at 07:17:42AM -0700, Peter Hurley wrote:
> On 05/16/2016 05:17 AM, Paul E. McKenney wrote:
> > On Mon, May 16, 2016 at 01:09:48PM +0200, Peter Zijlstra wrote:
> >> On Fri, May 13, 2016 at 10:58:05AM -0700, Peter Hurley wrote:
> >>>> Note that barrier() and READ_ONCE() have overlapping but not identical
> >>>> results and the combined use actually makes sense here.
> >>>>
> >>>> Yes, a barrier() anywhere in the loop will force a reload of the
> >>>> variable, _however_ it doesn't force that reload to not suffer from
> >>>> load tearing.
> >>>>
> >>>> Using volatile also forces a reload, but also ensures the load cannot
> >>>> be torn IFF it is of machine word size and naturally aligned.
> >>>>
> >>>> So while the READ_ONCE() here is pointless for forcing the reload
> >>>> (that's already ensured), we still need to make sure the load isn't torn.
> >>>
> >>> If load tearing a naturally aligned pointer is a real code generation
> >>> possibility then the rcu list code is broken too (which loads ->next
> >>> directly; cf. list_for_each_entry_rcu() & list_for_each_entry_lockless()).
> >>>
> >>> For 4.4, Paul added READ_ONCE() checks for list_empty() et al, but iirc
> >>> that had to do with control dependencies and not load tearing.
> >>
> >> Well, Paul is the one who started the whole load/store tearing thing, so
> >> I suppose he knows what he's doing.
> > 
> > That had to do with suppressing false positives for one of Dmitry
> > Vyukov's concurrency checkers.  I suspect that Peter Hurley is right
> > that continued use of that checker would identify other places needing
> > READ_ONCE(), but from what I understand that is on hold pending a formal
> > definition of the Linux-kernel memory model.  (KCC and Dmitry (CCed)
> > can correct me if I am confused on this point.)
> > 
> >> That said, it's fairly recent as these things go, so lots of code hasn't
> >> been updated yet, and it's also a very unlikely thing for a compiler to
> >> do; since it mostly doesn't make sense to emit multiple instructions
> >> where one will do, it's not a very high-priority thing either.
> >>
> >> But from what I understand, the compiler is free to emit all kinds of
> >> nonsense for !volatile loads/stores.
> > 
> > That is quite true.  :-/
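
For illustration only (this is not from any patch in this thread), here
is one way a compiler could legally tear a plain store on a 64-bit
target, and how WRITE_ONCE() avoids that for naturally aligned
machine-word accesses:

	unsigned long x;	/* hypothetical shared variable */

	void store_plain(void)
	{
		/* Plain store: the compiler may legally split this into,
		 * say, two 32-bit immediate stores. */
		x = 0x1234567812345678UL;
	}

	void store_once(void)
	{
		/* Volatile access of an aligned machine word: emitted as
		 * a single, untorn store in practice. */
		WRITE_ONCE(x, 0x1234567812345678UL);
	}
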
> > 
> >>> OTOH, this patch might actually produce store-tearing:
> >>>
> >>> +static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
> >>> +{
> >>> +	/*
> >>> +	 * We check the owner value first to make sure that we will only
> >>> +	 * do a write to the rwsem cacheline when it is really necessary
> >>> +	 * to minimize cacheline contention.
> >>> +	 */
> >>> +	if (sem->owner != RWSEM_READER_OWNED)
> >>> +		sem->owner = RWSEM_READER_OWNED;
> >>> +}
> >>
> >> Correct; which is why we should always use {READ,WRITE}_ONCE() for
> >> anything that is used locklessly.
> > 
> > Completely agreed.  Improve readability of code by flagging lockless
> > shared-memory accesses, help checkers better find bugs, and prevent the
> > occasional compiler mischief!
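
If it would help, here is a sketch of what the flagged version of the
helper above might look like (illustrative only, not the actual patch):

	static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
	{
		/*
		 * Still avoid dirtying the rwsem cacheline unless the
		 * owner field really needs to change, but flag both
		 * lockless accesses so the compiler cannot tear them.
		 */
		if (READ_ONCE(sem->owner) != RWSEM_READER_OWNED)
			WRITE_ONCE(sem->owner, RWSEM_READER_OWNED);
	}
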
> 
> I think this would be a mistake for 3 reasons:
> 
> 1. If READ_ONCE()/WRITE_ONCE() is necessary to prevent load/store tearing
>    of any normally-atomic type (char/int/long/void*), then _every_ access
>    would require READ_ONCE()/WRITE_ONCE(), thus eliminating any possibility
>    of compiler optimization (e.g. eliding redundant loads) where it would
>    otherwise be possible.

The point about eliding redundant loads is a good one, at least in those
cases where it is a reasonable optimization.  Should we ever get to a
point where we no longer use pre-C11 compilers, those use cases could
potentially use memory_order_relaxed loads.  Preferably wrapped in
something that can be typed with fewer characters.  And it could of course
lead to an interesting discussion of what use cases would be required
to justify this change, but what else is new?
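
Roughly the sort of thing I have in mind, assuming a post-C11 compiler
and with the wrapper name made up purely for illustration:

	#include <stdatomic.h>

	/* Untorn like READ_ONCE(), but the compiler remains free to
	 * combine or elide adjacent relaxed loads of the same location. */
	#define LOAD_RELAXED(x) \
		atomic_load_explicit(&(x), memory_order_relaxed)

	_Atomic long owner;

	long owner_twice(void)
	{
		/* These two loads may legally be merged into one. */
		return LOAD_RELAXED(owner) + LOAD_RELAXED(owner);
	}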

> 2. Makes a mess of otherwise readable code.
> 
> 3. Error-prone; ie., easy to overlook in review.

But #2 and #3 are at odds with each other.  It is all too easy to miss a
critically important load or store that has not been flagged in some way.
So #2's readable code can easily be problematic, as the concurrency is
hidden from both the compiler and the poor developer reading the code.

> There is no practical difference between _always_ using READ_ONCE()/WRITE_ONCE()
> (to prevent tearing) and declaring the field volatile.

Actually, yes there is a difference.  If you hold the update-side lock,
you don't have to use READ_ONCE() when reading the variable.  If you
have further excluded readers (for example, at initialization time or
at teardown time), then you don't have to use either READ_ONCE() or
WRITE_ONCE().
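
For concreteness, a made-up example covering those cases (the structure
and its lock are purely illustrative):

	struct foo {
		spinlock_t lock;
		int state;	/* updated under ->lock, read locklessly */
	};

	void foo_init(struct foo *f)
	{
		spin_lock_init(&f->lock);
		f->state = 0;	/* no readers yet: plain access suffices */
	}

	void foo_update(struct foo *f, int new)
	{
		spin_lock(&f->lock);
		/* Plain read is fine: the lock excludes other updaters. */
		if (f->state != new)
			/* Lockless readers can see this store, so flag it. */
			WRITE_ONCE(f->state, new);
		spin_unlock(&f->lock);
	}

	int foo_peek(struct foo *f)
	{
		return READ_ONCE(f->state);	/* lockless read */
	}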

> So we've come full-circle from volatile-considered-harmful.

Not really.  We are (hopefully) using volatile for jobs that it can do.
In contrast, in the past people were expecting it to do more than it
reasonably can do.

							Thanx, Paul
