Date:	Wed, 30 Oct 2013 18:44:02 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Victor Kaplansky <VICTORK@...ibm.com>
Cc:	Anton Blanchard <anton@...ba.org>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Linux PPC dev <linuxppc-dev@...abs.org>,
	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
	Michael Ellerman <michael@...erman.id.au>,
	Michael Neuling <mikey@...ling.org>,
	Oleg Nesterov <oleg@...hat.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: perf events ring buffer memory barrier on powerpc

On Wed, Oct 30, 2013 at 07:14:29PM +0200, Victor Kaplansky wrote:
> We need a complete rmb() here IMO. I think there is a fundamental
> difference between loads and stores in this respect. Loads are allowed
> to be hoisted by the compiler or executed speculatively by the HW. To
> prevent the load "*(ubuf->data + tail)" from being hoisted above the
> "ubuf->head" load you would need something like this:

Indeed, we could compute and load ->data + tail the moment we've
completed the tail load but before we've completed the head load and
done the comparison.

So yes, full rmb() it is.

> void
> ubuf_read(void)
> {
>         u64 head, tail;
> 
>         tail = ubuf->tail;
>         head = ACCESS_ONCE(ubuf->head);
> 
>         /*
>          * Ensure we read the buffer boundaries before the actual buffer
>          * data...
>          */
> 
>         while (tail != head) {
>                 smp_read_barrier_depends();     /* for Alpha */
>                 obj = *(ubuf->data + head - 128);
>                 /* process obj */
>                 tail += obj->size;
>                 tail %= ubuf->size;
>         }
> 
>         /*
>          * Ensure all data reads are complete before we issue the
>          * ubuf->tail update; once that update hits, kbuf_write() can
>          * observe and overwrite data.
>          */
>         smp_mb();               /* D, matches with A */
> 
>         ubuf->tail = tail;
> }
> 
> (note that "head" is part of address calculation of obj load now).

Right, explicit dependent loads; I was hoping the conditional in between
might be enough, but as argued above it is not. The above cannot work in
our case though: we must use tail to find the obj, since we have
variable-size objects.

> But even in this demo example, an smp_read_barrier_depends() before
> "obj = *(ubuf->data + head - 128);" is required on architectures
> like Alpha. On saner architectures, smp_read_barrier_depends()
> translates to nothing.

Sure.. I know all about that.
