Message-id: <4547D760.9000200@cosmosbay.com>
Date:	Wed, 01 Nov 2006 00:08:16 +0100
From:	Eric Dumazet <dada1@...mosbay.com>
To:	Nick Piggin <nickpiggin@...oo.com.au>
Cc:	Ingo Molnar <mingo@...e.hu>, Andrew Morton <akpm@...l.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Jens Axboe <jens.axboe@...cle.com>
Subject: Re: [PATCH] splice : two smp_mb() can be omitted

Nick Piggin wrote:
> Eric Dumazet wrote:
> 
>> On Tuesday 31 October 2006 10:40, Nick Piggin wrote:
>>
>>
>>> Uh, there is nothing that says mutex_unlock or any unlock
>>> functions contain an implicit smp_mb(). What is given is that the
>>> lock and unlock obey acquire and release memory ordering,
>>> respectively.
>>>
>>> a = x;
>>> xxx_unlock
>>> b = y;
>>>
>>> In this situation, the load of y can be executed before that of x.
>>> And some architectures will even do so (i386 can, because the
>>> unlock is an unprefixed store; ia64 can, because it uses a release
>>> barrier in the unlock).
>>>
>>
>> Hum... it seems your mutex_unlock() i386/x86_64 copy is not the same
>> as mine :)
>>
> 
> OK, replace xxx with mutex, and what I've said still holds true for ia64.
> 
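
Ah, OK. So on ia64 the unlock is only a release store, and a load that
follows the unlock may still be hoisted above it. Let me restate your
example to check my understanding (the __st_rel() below is just my
shorthand for a release store, not a real kernel function):

	a = x;                      /* (1) load before the unlock     */
	__st_rel(&lock->count, 1);  /* (2) release: (1) is ordered    */
	                            /*     before (2) ...             */
	b = y;                      /* (3) ... but nothing orders (3) */
	                            /*     against (1), so the CPU    */
	                            /*     may load y before x        */
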
>> Maybe we could document whether mutex_{lock|unlock}() has an
>> implicit smp_mb() or not.
>>
> 
> It does not; none of the unlock functions ever has.
> 
>> If not, delete the smp_mb() calls from include/asm-generic/mutex-dec.h
> 
> They should be deleted (and from mutex-xchg). NOT because there is
> no need for a memory barrier, but because the
> atomic_alter_value_and_return_something functions always provide a
> barrier before and after the operation, as per
> Documentation/atomic_ops.txt
> 
> Again, lock / unlock operations require acquire / release
> consistency. This is a memory ordering operation. It is not
> equivalent to smp_mb, though.
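
So, if I read you correctly, the smp_mb() in code like this (my
simplified reading of the include/asm-generic/mutex-dec.h fastpath)
is the redundant one, because atomic_dec_return() already implies a
full barrier on both sides:

	static inline void
	__mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
	{
		if (unlikely(atomic_dec_return(count) < 0))
			fail_fn(count);
		else
			smp_mb();	/* redundant if atomic_dec_return()
					 * already acts as a full barrier */
	}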

This thread just shows how difficult it is to use all this stuff
consistently across the whole kernel. Or maybe it is just me? Should I
work on IA64 to have a chance to learn?


For example, the comments in Documentation/atomic_ops.txt about
atomic_inc_return() and atomic_dec_return() seem to contradict
themselves.

--------------------------

Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation.  It must be
done such that all memory operations before and after the atomic
operation calls are strongly ordered with respect to the atomic
operation itself.

-------------------------

When I read this, I understand that we (the users of such functions)
need to add smp_mb() ourselves. (That is, those functions won't do it
for us.)
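
Under that first reading, callers would have to bracket the call
themselves, something like:

	obj->pending = 1;
	smp_mb();			/* caller-supplied barrier */
	atomic_inc_return(&obj->count);
	smp_mb();			/* caller-supplied barrier */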

But the text that immediately follows is:

----------------------------
For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.

--------------------------

Now I understand the reverse: the functions themselves provide the
barriers, and callers need not add anything.
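
Under that second reading, a caller can rely on the implicit barriers,
for example in a classical refcounting sketch:

	obj->dead = 1;		/* ordered before the decrement,
				 * as if smp_mb() were here ...    */
	if (atomic_dec_return(&obj->refcnt) == 0)
		kfree(obj);	/* ... and ordered after it, as if
				 * smp_mb() were there too         */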


Time to sleep for sure :) And find an IA64 platform tomorrow morning :)

