Message-ID: <5307E550.4040004@hurleysoftware.com>
Date: Fri, 21 Feb 2014 18:46:24 -0500
From: Peter Hurley <peter@...leysoftware.com>
To: Tejun Heo <tj@...nel.org>
CC: laijs@...fujitsu.com, linux-kernel@...r.kernel.org,
Stefan Richter <stefanr@...6.in-berlin.de>,
linux1394-devel@...ts.sourceforge.net,
Chris Boot <bootc@...tc.net>, linux-scsi@...r.kernel.org,
target-devel@...r.kernel.org
Subject: Re: [PATCH 4/9] firewire: don't use PREPARE_DELAYED_WORK
On 02/21/2014 06:18 PM, Tejun Heo wrote:
> On Fri, Feb 21, 2014 at 06:01:29PM -0500, Peter Hurley wrote:
>> smp_mb__after_unlock_lock() is only for ordering memory operations
>> between two spin-locked sections, either on the same lock or by
>> the same task/cpu. Like:
>>
>> i = 1
>> spin_unlock(lock1)
>> spin_lock(lock2)
>> smp_mb__after_unlock_lock()
>> j = 1
>>
>> This guarantees that the store to j happens after the store to i.
>> Without it, a cpu can reorder the operations like this:
>>
>> spin_lock(lock2)
>> j = 1
>> i = 1
>> spin_unlock(lock1)
>
> Hmmm? I'm pretty sure that's a full barrier. Local processor is
> always in order (w.r.t. the compiler).
It's a long story but the short version is that
Documentation/memory-barriers.txt was recently overhauled to reflect
what cpus actually do and what the different archs actually deliver.

Turns out that unlock + lock is not guaranteed by all archs to be a
full barrier. Thus the smp_mb__after_unlock_lock().

This is now all spelled out in memory-barriers.txt under the
sub-heading "IMPLICIT KERNEL MEMORY BARRIERS".
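
Concretely, the pattern looks something like this (a minimal sketch;
lock1, lock2, i and j are placeholder names for illustration, not
code from the firewire patch):

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(lock1);
    static DEFINE_SPINLOCK(lock2);

    static int i, j;

    static void writer(void)
    {
            spin_lock(&lock1);
            i = 1;
            spin_unlock(&lock1);

            spin_lock(&lock2);
            /*
             * Without this, the UNLOCK(lock1) + LOCK(lock2) sequence
             * is not guaranteed to be a full barrier on every arch,
             * so another CPU could observe the store to j before the
             * store to i.
             */
            smp_mb__after_unlock_lock();
            j = 1;
            spin_unlock(&lock2);
    }

On archs where unlock + lock is already a full barrier (x86, for
instance) the macro is a no-op; archs that need it (powerpc, for
instance) expand it to smp_mb().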
Regards,
Peter Hurley