Message-ID: <5308F0E2.3030804@hurleysoftware.com>
Date:	Sat, 22 Feb 2014 13:48:02 -0500
From:	Peter Hurley <peter@...leysoftware.com>
To:	unlisted-recipients:; (no To-header on input)
CC:	Tejun Heo <tj@...nel.org>, laijs@...fujitsu.com,
	linux-kernel@...r.kernel.org,
	Stefan Richter <stefanr@...6.in-berlin.de>,
	linux1394-devel@...ts.sourceforge.net,
	Chris Boot <bootc@...tc.net>, linux-scsi@...r.kernel.org,
	target-devel@...r.kernel.org
Subject: Re: [PATCH 4/9] firewire: don't use PREPARE_DELAYED_WORK

On 02/22/2014 01:43 PM, James Bottomley wrote:
>
> On Fri, 2014-02-21 at 18:01 -0500, Peter Hurley wrote:
>> On 02/21/2014 11:57 AM, Tejun Heo wrote:
>>> Yo,
>>>
>>> On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote:
>>>> Ok, I can do that. But AFAIK it'll have to be an smp_rmb(); there is
>>>> no smp_mb__after_unlock().
>>>
>>> We do have smp_mb__after_unlock_lock().
>>>
>>>> [ After thinking about it some, I don't think preventing speculative
>>>>    writes before clearing PENDING is useful or necessary, so that's
>>>>    why I'm suggesting only the rmb. ]
>>>
>>> But smp_mb__after_unlock_lock() would be cheaper on most popular
>>> archs, I think.
>>
>> smp_mb__after_unlock_lock() is only for ordering memory operations
>> between two spin-locked sections, either on the same lock or by
>> the same task/cpu. Like:
>>
>>      i = 1
>>      spin_unlock(lock1)
>>      spin_lock(lock2)
>>      smp_mb__after_unlock_lock()
>>      j = 1
>>
>> This guarantees that the store to j happens after the store to i.
>> Without it, a cpu can
>>
>>      spin_lock(lock2)
>>      j = 1
>>      i = 1
>>      spin_unlock(lock1)
>
> No, the CPU cannot.  If the CPU were allowed to reorder locking
> sequences, we'd get speculation-induced ABBA deadlocks.  The rules are
> quite simple: loads and stores cannot speculate out of critical
> sections.

If you look carefully, you'll notice that the stores have not been
moved from their respective critical sections; simply that the two
critical sections overlap because they use different locks.
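
To make the overlap concrete, here is a minimal sketch using the stock
kernel primitives; the locks, variables and the writer() function are
illustrative only, not taken from the patch under discussion:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(lock1);
	static DEFINE_SPINLOCK(lock2);

	static int i, j;

	static void writer(void)
	{
		spin_lock(&lock1);
		i = 1;
		spin_unlock(&lock1);

		spin_lock(&lock2);
		/*
		 * Without this barrier, the store to i is ordered only by
		 * lock1's critical section and the store to j only by
		 * lock2's; since the two sections use different locks, the
		 * cpu may let them overlap, and a cpu holding neither lock
		 * can then observe j == 1 while i == 0.
		 */
		smp_mb__after_unlock_lock();
		j = 1;
		spin_unlock(&lock2);
	}

Neither store has left its own critical section; the barrier only
forbids the unlock/lock pair from reordering in a way that lets the two
sections overlap.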

Regards,
Peter Hurley

PS - Your reply address is unroutable.

> All architectures have barriers in place to prevent this ...
> I know from personal experience because the barriers on PARISC were
> originally too weak and we did get some speculation out of the critical
> sections, which was very nasty to debug.
>
> Stuff may speculate into critical sections from non-critical code, but
> never out of them, and critical section boundaries may not reorder to
> cause an overlap.
