Date:	Tue, 11 Nov 2014 13:12:32 -0800
From:	Alexander Duyck <alexander.h.duyck@...hat.com>
To:	Will Deacon <will.deacon@....com>,
	"alexander.duyck@...il.com" <alexander.duyck@...il.com>
CC:	"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Michael Neuling <mikey@...ling.org>,
	Tony Luck <tony.luck@...el.com>,
	Mathieu Desnoyers <mathieu.desnoyers@...ymtl.ca>,
	Peter Zijlstra <peterz@...radead.org>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	Heiko Carstens <heiko.carstens@...ibm.com>,
	Oleg Nesterov <oleg@...hat.com>,
	Michael Ellerman <michael@...erman.id.au>,
	Geert Uytterhoeven <geert@...ux-m68k.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	Russell King <linux@....linux.org.uk>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH] arch: Introduce read_acquire()

On 11/11/2014 11:47 AM, Will Deacon wrote:
> Hello,
>
> On Tue, Nov 11, 2014 at 06:57:05PM +0000, alexander.duyck@...il.com wrote:
>> From: Alexander Duyck <alexander.h.duyck@...hat.com>
>>
>> In the case of device drivers it is common to utilize receive descriptors
>> in which a single field is used to determine if the descriptor is currently
>> in the possession of the device or the CPU.  In order to prevent any other
>> fields from being read, a rmb() is used, resulting in something like the
>> following code snippet from ixgbe_main.c:
>>
>> 	if (!ixgbe_test_staterr(rx_desc, IXGBE_RXD_STAT_DD))
>> 		break;
>>
>> 	/*
>> 	 * This memory barrier is needed to keep us from reading
>> 	 * any other fields out of the rx_desc until we know the
>> 	 * RXD_STAT_DD bit is set
>> 	 */
>> 	rmb();
>>
>> On reviewing the documentation and code for smp_load_acquire(), it occurred
>> to me that implementing something similar for CPU <-> device interaction
>> would be worthwhile.  This commit provides just the load/read side of this
>> in the form of read_acquire().  This new primitive orders the specified
>> read against any subsequent reads.  As a result we can reduce the above
>> code snippet down to:
>>
>> 	/* This memory barrier is needed to keep us from reading
>> 	 * any other fields out of the rx_desc until we know the
>> 	 * RXD_STAT_DD bit is set
>> 	 */
>> 	if (!(read_acquire(&rx_desc->wb.upper.status_error) &
>> 	      cpu_to_le32(IXGBE_RXD_STAT_DD)))
>> 		break;
> Minor nit on naming, but load_acquire would match what we do with barriers,
> where you simply drop the smp_ prefix if you want the thing to work on UP
> systems too.

The problem is that this is slightly different: load_acquire in my mind would 
use a mb() call, whereas I only use a rmb().  That is why I chose read_acquire 
as the name.
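
To make the distinction concrete, what I have in mind is roughly the 
following (a sketch of the idea rather than the exact code in the patch, 
so treat the details as illustrative):

	/* Sketch: read_acquire() as a plain load followed by a read barrier.
	 * It only has to order the load against later *reads*, so a rmb()
	 * is sufficient.
	 */
	#define read_acquire(p)						\
	({								\
		typeof(*p) ___p1 = ACCESS_ONCE(*p);			\
		rmb();							\
		___p1;							\
	})

	/* A full load_acquire(), by contrast, would have to order the load
	 * against all later accesses, loads and stores alike, which on many
	 * architectures means paying for the heavier mb().
	 */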

> I'm not familiar with the driver in question, but how are the descriptors
> mapped? Is the read barrier here purely limiting re-ordering of normal
> memory accesses by the CPU? If so, isn't there also scope for store_release
> when updating, e.g. next_to_watch in the same driver?

So the driver in question is using descriptor rings allocated via 
dma_alloc_coherent.  The device is notified that new descriptors are 
present via a memory-mapped I/O register; the device then reads the 
descriptor via a DMA operation and writes it back with another DMA 
operation, and in the process of doing so it sets the IXGBE_RXD_STAT_DD bit.

The problem with the store_release logic is that it would need to key 
off of a write to memory-mapped I/O.  The idea had crossed my mind, but 
I wasn't confident I had a good enough understanding of things to try 
to deal with memory ordering for cacheable and uncacheable memory in the 
same call.  I would have to do some more research to see if something 
like that is even possible, as I suspect some of the architectures may 
not support it.
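
For reference, the notify side in drivers like this typically looks 
something like the following today (a rough sketch of the common pattern 
rather than the exact ixgbe code, so the field and register names are 
illustrative):

	/* The descriptor fields for slot i have already been filled in above
	 * (not shown).  The wmb() ensures those coherent-memory stores are
	 * visible to the device before the MMIO tail write below, which is
	 * what tells the hardware that new descriptors are available.  A
	 * store_release() replacing this pair would have to order a cacheable
	 * store against an uncacheable one in a single primitive.
	 */
	wmb();
	writel(i, rx_ring->tail);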

> We also need to understand how this plays out with
> smp_mb__after_unlock_lock, which is currently *only* implemented by PowerPC.
> If we end up having a similar mess to mmiowb, where PowerPC both implements
> the barrier *and* plays tricks in its spin_unlock code, then everybody
> loses because we'd end up with release doing the right thing anyway.

PowerPC is not much of a risk in this patch.  The implementation I did 
just fell back to a rmb().

The architectures I need to sort out are arm, x86, sparc, ia64, and s390 
as they are the only ones that tried to make use of the smp_load_acquire 
logic.
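
For context, the generic fallback those architectures override is 
essentially the following (quoting from memory, so the exact asm-generic 
definition may differ in detail); the arch-specific variants replace the 
smp_mb() with a cheaper barrier or a dedicated instruction:

	#define smp_load_acquire(p)					\
	({								\
		typeof(*p) ___p1 = ACCESS_ONCE(*p);			\
		compiletime_assert_atomic_type(*p);			\
		smp_mb();						\
		___p1;							\
	})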

> Peter and I spoke with Paul at LPC about strengthening
> smp_load_acquire/smp_store_release so that release->acquire ordering is
> maintained, which would allow us to drop smp_mb__after_unlock_lock
> altogether. That's stronger than acquire/release in C11, but I think it's
> an awful lot easier to use, particularly if device drivers are going to
> start using these primitives.
>
> Thoughts?
>
> Will

I generally want just enough of a barrier in place to keep things 
working properly without costing much in terms of CPU time.  If you can 
come up with a generic load_acquire/store_release that could take the 
place of this function, I am fine with that as long as it delivers the 
same level of performance.

Thanks,

Alex
