Date:	Thu, 30 Jun 2016 21:28:43 +0200
From:	Manfred Spraul <manfred@...orfullife.com>
To:	Davidlohr Bueso <dave@...olabs.net>
Cc:	Stephen Rothwell <sfr@...b.auug.org.au>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>, "H. Peter Anvin" <hpa@...or.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>, linux-next@...r.kernel.org,
	1vier1@....de, felixh@...ormatik.uni-bremen.de,
	stable@...r.kernel.org
Subject: Re: [PATCH 1/2] ipc/sem.c: Fix complex_count vs. simple op race

On 06/28/2016 07:27 AM, Davidlohr Bueso wrote:
> On Thu, 23 Jun 2016, Manfred Spraul wrote:
>
>> What I'm not sure yet is if smp_load_acquire() is sufficient:
>>
>> Thread A:
>>>       if (!READ_ONCE(sma->complex_mode)) {
>> The code is test-and-test: there are no barrier requirements for the
>> first test
>
> Yeah, it would just make us take the big lock unnecessarily, nothing
> fatal, and I agree it's probably worth the optimization. It still
> might be worth commenting.
>
I'll extend the comment: "no locking and no memory barrier"
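Roughly (just a sketch of the wording, not final):

	/*
	 * Optimistic check: no locking and no memory barrier.
	 * A stale "false" is harmless, complex_mode is rechecked
	 * under sem->lock below; a stale "true" only sends us to
	 * the slow path, i.e. the big lock is taken unnecessarily.
	 */
	if (!READ_ONCE(sma->complex_mode)) {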
>>>                /*
>>>                 * It appears that no complex operation is around.
>>>                 * Acquire the per-semaphore lock.
>>>                 */
>>>                spin_lock(&sem->lock);
>>>
>>>                if (!smp_load_acquire(&sma->complex_mode)) {
>>>                        /* fast path successful! */
>>>                        return sops->sem_num;
>>>                }
>>>                spin_unlock(&sem->lock);
>>>        }
>>
>> Thread B:
>>>       WRITE_ONCE(sma->complex_mode, true);
>>>
>>>        /* We need a full barrier:
>>>         * The write to complex_mode must be visible
>>>         * before we read the first sem->lock spinlock state.
>>>         */
>>>        smp_mb();
>>>
>>>        for (i = 0; i < sma->sem_nsems; i++) {
>>>                sem = sma->sem_base + i;
>>>                spin_unlock_wait(&sem->lock);
>>>        }
>>
>> If thread A's spin_lock() is allowed to execute as "read spinlock;
>> read complex_mode; write spinlock", then thread B would not notice
>> that thread A is in the critical section.
>
> Are you referring to the sem->lock word not being visibly locked
> before we read complex_mode (for the second time)? This issue was
> fixed in 2c610022711 (locking/qspinlock: Fix spin_unlock_wait() some
> more). So smp_load_acquire should be enough afaict, or are you
> referring to something else?
>
You are right, I hadn't read that patch fully.
If I understand it correctly, it means that spin_lock() is both an
acquire and a release - at least for qspinlocks.
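
To spell out the ordering requirement (only a sketch, using the names
from the code quoted above), the two paths form a classic
store-buffering pattern:

	Thread A (simple op):
		spin_lock(&sem->lock);	/* stores to sem->lock */
		r1 = smp_load_acquire(&sma->complex_mode);

	Thread B (complex op):
		WRITE_ONCE(sma->complex_mode, true);
		smp_mb();
		r2 = spin_is_locked(&sem->lock); /* the read in spin_unlock_wait() */

	Forbidden outcome: r1 == false && r2 == false.

Thread B's smp_mb() orders its store before its load. On thread A's
side the store that takes sem->lock must be ordered before the load of
complex_mode, and acquire semantics alone do not order an earlier
store against a later load - hence the question below.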

Is this valid for all spinlock implementations, on all architectures?
If not: how can I detect in generic code whether I can rely on a
release inside spin_lock()?
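
If the guarantee cannot be assumed generically, one conservative
fallback (just a sketch, not what the patch does) would be an explicit
full barrier in the fast path, which would also allow downgrading the
recheck to a plain READ_ONCE():

	spin_lock(&sem->lock);
	smp_mb();	/* order the lock's store before the load below */
	if (!READ_ONCE(sma->complex_mode)) {
		/* fast path successful! */
		return sops->sem_num;
	}
	spin_unlock(&sem->lock);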

--
     Manfred
