Message-ID: <464AEA47.7030600@gmail.com>
Date:	Wed, 16 May 2007 13:25:59 +0200
From:	Tejun Heo <htejun@...il.com>
To:	Oleg Nesterov <oleg@...sign.ru>
CC:	Andrew Morton <akpm@...ux-foundation.org>,
	David Chinner <dgc@....com>,
	David Howells <dhowells@...hat.com>,
	Gautham Shenoy <ego@...ibm.com>,
	Jarek Poplawski <jarkao2@...pl>, Ingo Molnar <mingo@...e.hu>,
	Srivatsa Vaddagiri <vatsa@...ibm.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] make cancel_rearming_delayed_work() reliable

Hello, Oleg.

Oleg Nesterov wrote:
>>> Yes? I need to think more about this.
>> I don't think the test for PENDING needs to be atomic either.  The cwq
>> pointer and VALID are one package.  PENDING lives its own life as an
>> atomic bit switch.
> 
> Yes, sure, it should not be atomic.  But (VALID && !PENDING) == BUG, so
> we can't just "kill" PENDING from the check above.

We can also mask it out, which is about the same as what we're currently
doing.  :-)
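
Just to make the "mask it out" idea concrete, something like the sketch
below (the VALID bit and the helper are made up for this discussion; only
WORK_STRUCT_PENDING exists in mainline, where work->data packs the cwq
pointer together with the flag bits):

	/*
	 * Hypothetical flag layout: bit 0 = PENDING (mainline),
	 * bit 1 = VALID (from this patch discussion).  The rest of
	 * work->data is the cwq pointer.
	 */
	#define WORK_STRUCT_PENDING	0
	#define WORK_STRUCT_VALID	1	/* hypothetical */

	/*
	 * Compare the cwq pointer + VALID while ignoring PENDING,
	 * which flips on its own as an atomic bit.
	 */
	static bool cwq_is_valid_for(struct work_struct *work,
				     struct cpu_workqueue_struct *cwq)
	{
		unsigned long data = atomic_long_read(&work->data);

		data &= ~(1UL << WORK_STRUCT_PENDING);	/* mask PENDING out */
		return data == ((unsigned long)cwq | (1UL << WORK_STRUCT_VALID));
	}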

>>>      Memory operations that occur before a LOCK operation may appear to happen
>>>      after it completes.
>> Which means that spin_lock() isn't a write barrier.
> 
> yes, it is not very clear which "Memory operations" memory-barriers.txt
> describes.
> 
>>                                                       lock is read
>> barrier, unlock is write barrier.
> 
> (To avoid a possible confusion: I am not arguing, I am trying to understand,
>  and please also note "in _theory_" above)
> 
> Is it so? Shouldn't we document this if it is true?
> 
>>                                    Otherwise, locking doesn't make much
>> sense.
> 
> Why? Could you please give a code example we have which relies on this?

Let's say there's a shared data structure protected by a spinlock and
two threads are accessing it.

1. thr1 locks spin
2. thr1 updates data structure
3. thr1 unlocks spin
4. thr2 locks spin
5. thr2 accesses data structure
6. thr2 unlocks spin

If spin_unlock is not a write barrier and spin_lock is not a read
barrier, nothing guarantees that the memory accesses in step #5 will see
the changes made in step #2.  The fetches could happen while the updates
in step #2 are still in progress, or even before them.
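
In kernel-style code, the above is just (a minimal sketch; the only point
is that thr2's reads rely on unlock/lock ordering):

	static DEFINE_SPINLOCK(lock);
	static int shared_a, shared_b;

	void thr1_update(void)			/* steps 1-3 */
	{
		spin_lock(&lock);
		shared_a = 1;
		shared_b = 2;
		spin_unlock(&lock);
	}

	void thr2_read(int *a, int *b)		/* steps 4-6 */
	{
		spin_lock(&lock);
		*a = shared_a;			/* must observe thr1's stores */
		*b = shared_b;
		spin_unlock(&lock);
	}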

In _theory_, if you had some partial/blocked memory barrier which acts
as a barrier only for a certain range of memory accesses, in this case
the accesses made while holding the spinlock, then lock and unlock
wouldn't have to be real memory barriers, but AFAIK no such thing
exists.  I don't think it's practical to worry about that at this point.
It's way too theoretical.  But, yes, it certainly needs documentation.

>>     If we're going the barrier way, I think we're better off with
>> explicit smp_wmb().  It's only barrier() on x86/64.
> 
> Yes. But note that we don't have any reason to do set_wq_data() under
> cwq->lock (this is also true for wake_up(->more_work) btw), so it makes
> sense to do this change anyway.  And "wmb + spin_lock" looks a bit
> strange; I _suspect_ spin_lock() implies a full barrier on most
> platforms.
> 
> Could you also look at
> 	http://marc.info/?t=116275561700001&r=1
> 
> and, in particular,
> 	http://marc.info/?l=linux-kernel&m=116281136122456

This is because spin_lock() isn't a write barrier, right?  I totally
agree with you there.
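
FWIW, the "explicit smp_wmb()" variant we're discussing would look
roughly like this (function names follow kernel/workqueue.c, but the
placement of the barrier is just a sketch of the option, not the actual
patch):

	static void __queue_work(struct cpu_workqueue_struct *cwq,
				 struct work_struct *work)
	{
		unsigned long flags;

		/*
		 * Publish the cwq pointer before the work shows up on
		 * the list; with an explicit smp_wmb() we don't rely on
		 * spin_lock() being any kind of write barrier, and
		 * set_wq_data() no longer needs to sit under cwq->lock.
		 */
		set_wq_data(work, cwq);
		smp_wmb();

		spin_lock_irqsave(&cwq->lock, flags);
		list_add_tail(&work->entry, &cwq->worklist);
		wake_up(&cwq->more_work);
		spin_unlock_irqrestore(&cwq->lock, flags);
	}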

Thanks.

-- 
tejun
