Message-ID: <1393095138.11497.5.camel@dabdike.int.hansenpartnership.com>
Date:	Sat, 22 Feb 2014 10:52:18 -0800
From:	James Bottomley <James.Bottomley@...senPartnership.com>
To:	Peter Hurley <peter@...leysoftware.com>
Cc:	Tejun Heo <tj@...nel.org>, laijs@...fujitsu.com,
	linux-kernel@...r.kernel.org,
	Stefan Richter <stefanr@...6.in-berlin.de>,
	linux1394-devel@...ts.sourceforge.net,
	Chris Boot <bootc@...tc.net>, linux-scsi@...r.kernel.org,
	target-devel@...r.kernel.org
Subject: Re: [PATCH 4/9] firewire: don't use PREPARE_DELAYED_WORK

On Sat, 2014-02-22 at 13:48 -0500, Peter Hurley wrote:
> On 02/22/2014 01:43 PM, James Bottomley wrote:
> >
> > On Fri, 2014-02-21 at 18:01 -0500, Peter Hurley wrote:
> >> On 02/21/2014 11:57 AM, Tejun Heo wrote:
> >>> Yo,
> >>>
> >>> On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote:
> >>>> Ok, I can do that. But AFAIK it'll have to be an smp_rmb(); there is
> >>>> no mb__after_unlock.
> >>>
> >>> We do have smp_mb__after_unlock_lock().
> >>>
> >>>> [ After thinking about it some, I don't think preventing speculative
> >>>>     writes before clearing PENDING is useful or necessary, so that's
> >>>>     why I'm suggesting only the rmb. ]
> >>>
> >>> But smp_mb__after_unlock_lock() would be cheaper on most popular
> >>> archs, I think.
> >>
> >> smp_mb__after_unlock_lock() is only for ordering memory operations
> >> between two spin-locked sections that either use the same lock or
> >> are executed by the same task/cpu. Like:
> >>
> >>      i = 1
> >>      spin_unlock(lock1)
> >>      spin_lock(lock2)
> >>      smp_mb__after_unlock_lock()
> >>      j = 1
> >>
> >> This guarantees that the store to j happens after the store to i.
> >> Without it, a CPU can reorder this to:
> >>
> >>      spin_lock(lock2)
> >>      j = 1
> >>      i = 1
> >>      spin_unlock(lock1)
> >
> > No, the CPU cannot.  If the CPU were allowed to reorder locking
> > sequences, we'd get speculation-induced ABBA deadlocks.  The rules are
> > quite simple: loads and stores cannot speculate out of critical
> > sections.
> 
> If you look carefully, you'll notice that the stores have not been
> moved from their respective critical sections; it's simply that the two
> critical sections overlap because they use different locks.

You didn't look carefully enough at what I wrote.  You may not reorder
critical sections so that they overlap, regardless of whether the locks
are independent.  This is because we'd get ABBA deadlocks due to
speculation (A represents lock1 and B represents lock2 in your example).
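
To make the hazard concrete, here's an illustrative two-CPU sketch in
the same pseudocode style as your example (lockA/lockB and CPU0/CPU1
are made-up names, not from any real driver):

     CPU0                          CPU1
     ----                          ----
     spin_lock(lockA);             spin_lock(lockB);
     ...                           ...
     spin_unlock(lockA);           spin_unlock(lockB);
     spin_lock(lockB);             spin_lock(lockA);

If either CPU could speculate its second lock acquisition ahead of its
first unlock, the two critical sections would overlap: CPU0 would hold
lockA while waiting for lockB, and CPU1 would hold lockB while waiting
for lockA.  That's an ABBA deadlock which program order forbids.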

James


