Message-ID: <20131102174645.GC3947@linux.vnet.ibm.com>
Date: Sat, 2 Nov 2013 10:46:45 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Victor Kaplansky <VICTORK@...ibm.com>,
Oleg Nesterov <oleg@...hat.com>,
Anton Blanchard <anton@...ba.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Frederic Weisbecker <fweisbec@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux PPC dev <linuxppc-dev@...abs.org>,
Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
Michael Ellerman <michael@...erman.id.au>,
Michael Neuling <mikey@...ling.org>
Subject: Re: perf events ring buffer memory barrier on powerpc

On Fri, Nov 01, 2013 at 05:11:29PM +0100, Peter Zijlstra wrote:
> On Wed, Oct 30, 2013 at 11:40:15PM -0700, Paul E. McKenney wrote:
> > > void kbuf_write(int sz, void *buf)
> > > {
> > > 	u64 tail = ACCESS_ONCE(ubuf->tail); /* last location userspace read */
> > > 	u64 offset = kbuf->head; /* we already know where we last wrote */
> > > 	u64 head = offset + sz;
> > >
> > > 	if (!space(tail, offset, head)) {
> > > 		/* discard @buf */
> > > 		return;
> > > 	}
> > >
> > > 	/*
> > > 	 * Ensure that if we see the userspace tail (ubuf->tail) such
> > > 	 * that there is space to write @buf without overwriting data
> > > 	 * userspace hasn't seen yet, we won't in fact store data before
> > > 	 * that read completes.
> > > 	 */
> > >
> > > 	smp_mb(); /* A, matches with D */
> > >
> > > 	write(kbuf->data + offset, buf, sz);
> > > 	kbuf->head = head % kbuf->size;
> > >
> > > 	/*
> > > 	 * Ensure that we write all the @buf data before we update the
> > > 	 * userspace visible ubuf->head pointer.
> > > 	 */
> > > 	smp_wmb(); /* B, matches with C */
> > >
> > > 	ubuf->head = kbuf->head;
> > > }
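
(For reference, the userspace read side containing the matching barriers C
and D would look roughly like the sketch below; the ubuf_read() name, the
ubuf->data field, and the record-consumption loop are illustrative, not
quoted from the original code.)

	void ubuf_read(void)
	{
		u64 tail = ubuf->tail;			/* we own the tail */
		u64 head = ACCESS_ONCE(ubuf->head);	/* written by the kernel */

		/* Read ubuf->head before reading any of the data it covers. */
		smp_rmb(); /* C, matches with B */

		while (tail != head) {
			/* consume the record at ubuf->data + tail, advance tail */
		}

		/*
		 * Finish all data reads before publishing the new tail; once
		 * the kernel sees it, kbuf_write() may overwrite that region.
		 */
		smp_mb(); /* D, matches with A */

		ubuf->tail = tail;
	}
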
>
> > > Now the whole crux of the question is if we need barrier A at all, since
> > > the STORES issued by the @buf writes are dependent on the ubuf->tail
> > > read.
> >
> > The dependency you are talking about is via the "if" statement?
> > Even C/C++11 is not required to respect control dependencies.
>
> But surely we must be able to make it so; otherwise you'd never be able
> to write:
>
> void *ptr = obj1;
>
> void foo(void)
> {
>
> 	/* create obj2, obj3 */
>
> 	smp_wmb(); /* ensure the objs are complete */
>
> 	/* expose either obj2 or obj3 */
> 	if (x)
> 		ptr = obj2;
> 	else
> 		ptr = obj3;

OK, the smp_wmb() orders the creation and the exposing. But the
compiler can do this:

	ptr = obj3;
	if (x)
		ptr = obj2;

And that could momentarily expose obj3 to readers, and these readers
might be fatally disappointed by the free() below. If you instead said:

	if (x)
		ACCESS_ONCE(ptr) = obj2;
	else
		ACCESS_ONCE(ptr) = obj3;

then the general consensus appears to be that the compiler would not
be permitted to carry out the above optimization. Since you have
the smp_wmb(), readers that are properly ordered (e.g., smp_rmb() or
rcu_dereference()) would be prevented from seeing pre-initialization
state.
> 	/* free the unused one */
> 	if (x)
> 		free(obj3);
> 	else
> 		free(obj2);
> }
>
> Earlier you said that 'volatile' or '__atomic' avoids speculative
> writes; so would:
>
> volatile void *ptr = obj1;
>
> Make the compiler respect control dependencies again? If so, could we
> somehow mark that !space() condition volatile?

The compiler should, but the CPU is still free to ignore the control
dependencies in the general case.

We might be able to rely on weakly ordered hardware refraining from
speculating stores, but I am not sure that this applies across all
architectures of interest.  We definitely can -not- rely on weakly
ordered hardware refraining from speculating loads.
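
For example, nothing prevents the hardware from doing the following on the
load side (the load_side() function and the flag/data variables are made up
for illustration, with ACCESS_ONCE() as in the examples above):

	int flag;
	int data;

	int load_side(void)
	{
		int r = 0;

		if (ACCESS_ONCE(flag)) {
			/*
			 * Control dependency only: the CPU may issue this
			 * load speculatively, before the load of "flag" and
			 * the branch resolve, and keep the result if the
			 * branch turns out to be taken.  Forbidding that
			 * takes smp_rmb() or stronger before the load.
			 */
			r = ACCESS_ONCE(data);
		}
		return r;
	}
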
> Currently the above would be considered a valid pattern. But you're
> saying its not because the compiler is free to expose both obj2 and obj3
> (for however short a time) and thus the free of the 'unused' object is
> incorrect and can cause use-after-free.

Yes, it is definitely unsafe and invalid in the absence of ACCESS_ONCE().

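For concreteness, the ACCESS_ONCE()-based version of your example, together
with a matching reader, would look something like the userspace sketch below
(the stand-in macro definitions and struct obj are just for illustration,
and the free() of the unused object is omitted):

	/* Illustrative userspace stand-ins for the kernel primitives: */
	#define ACCESS_ONCE(x)	(*(volatile __typeof__(x) *)&(x))
	#define smp_wmb()	__sync_synchronize()	/* full fence; stronger than needed */
	#define smp_rmb()	__sync_synchronize()	/* likewise */

	struct obj { int a; };

	struct obj *ptr;	/* initially points to obj1 */

	void writer(struct obj *obj2, struct obj *obj3, int x)
	{
		obj2->a = 1;		/* create obj2, obj3 */
		obj3->a = 2;

		smp_wmb();		/* order initialization before publication */

		/* Volatile stores: the compiler may not add, drop, or reorder them. */
		if (x)
			ACCESS_ONCE(ptr) = obj2;
		else
			ACCESS_ONCE(ptr) = obj3;
	}

	int reader(void)
	{
		struct obj *p = ACCESS_ONCE(ptr);

		smp_rmb();		/* pairs with the writer's smp_wmb() */
		return p->a;		/* sees the fully initialized object */
	}
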
> In fact; how can we be sure that:
>
> void *ptr = NULL;
>
> void bar(void)
> {
> 	void *obj = malloc(...);
>
> 	/* fill obj */
>
> 	if (!err)
> 		rcu_assign_pointer(ptr, obj);
> 	else
> 		free(obj);
> }
>
> Does not get 'optimized' into:
>
> void bar(void)
> {
> 	void *obj = malloc(...);
> 	void *old_ptr = ptr;
>
> 	/* fill obj */
>
> 	rcu_assign_pointer(ptr, obj);
> 	if (err) { /* because runtime profile data says this is unlikely */
> 		ptr = old_ptr;
> 		free(obj);
> 	}
> }

In this particular case, the barrier() implied by the smp_wmb() in
rcu_assign_pointer() will prevent this "optimization". However, other
"optimizations" are the reason why I am working to introduce ACCESS_ONCE()
into rcu_assign_pointer().
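
To make that concrete, rcu_assign_pointer() boils down to roughly the
following (sparse annotations and other details omitted); the _patched name
is just for illustration of the change I am working on:

	/*
	 * Roughly what rcu_assign_pointer() expands to today.  The smp_wmb()
	 * implies barrier(), which is what prevents the compiler from
	 * rewriting bar() as shown above.
	 */
	#define rcu_assign_pointer(p, v) \
		do { \
			smp_wmb(); \
			(p) = (v); \
		} while (0)

	/*
	 * With the ACCESS_ONCE() change, the store additionally becomes
	 * volatile, ruling out other store "optimizations" as well.
	 */
	#define rcu_assign_pointer_patched(p, v) \
		do { \
			smp_wmb(); \
			ACCESS_ONCE(p) = (v); \
		} while (0)
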
> We _MUST_ be able to rely on control flow, otherwise we might as well
> all go back to writing kernels in asm.

It isn't -that- bad! ;-)

							Thanx, Paul