Message-ID: <20160628052717.GA13793@linux-80c1.suse>
Date:	Mon, 27 Jun 2016 22:27:17 -0700
From:	Davidlohr Bueso <dave@...olabs.net>
To:	Manfred Spraul <manfred@...orfullife.com>
Cc:	Stephen Rothwell <sfr@...b.auug.org.au>,
	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...e.hu>, "H. Peter Anvin" <hpa@...or.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>, linux-next@...r.kernel.org,
	1vier1@....de, felixh@...ormatik.uni-bremen.de,
	stable@...r.kernel.org
Subject: Re: [PATCH 1/2] ipc/sem.c: Fix complex_count vs. simple op race

On Thu, 23 Jun 2016, Manfred Spraul wrote:

>What I'm not sure yet is if smp_load_acquire() is sufficient:
>
>Thread A:
>>       if (!READ_ONCE(sma->complex_mode)) {
>The code is test_and_test, no barrier requirements for first test

Yeah, it would just make us take the big lock unnecessarily, nothing fatal,
and I agree it's probably worth the optimization. It still might be worth
a comment, though.

>>                /*
>>                 * It appears that no complex operation is around.
>>                 * Acquire the per-semaphore lock.
>>                 */
>>                spin_lock(&sem->lock);
>>
>>                if (!smp_load_acquire(&sma->complex_mode)) {
>>                        /* fast path successful! */
>>                        return sops->sem_num;
>>                }
>>                spin_unlock(&sem->lock);
>>        }
>
>Thread B:
>>       WRITE_ONCE(sma->complex_mode, true);
>>
>>        /* We need a full barrier:
>>         * The write to complex_mode must be visible
>>         * before we read the first sem->lock spinlock state.
>>         */
>>        smp_mb();
>>
>>        for (i = 0; i < sma->sem_nsems; i++) {
>>                sem = sma->sem_base + i;
>>                spin_unlock_wait(&sem->lock);
>>        }
>
>If thread A is allowed to issue read_spinlock;read complex_mode;write 
>spinlock, then thread B would not notice that thread A is in the 
>critical section

Are you referring to the sem->lock word not being visibly locked before we
read complex_mode (for the second time)? That issue was fixed by commit
2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more"), so
smp_load_acquire() should be enough afaict. Or are you referring to
something else?

Thanks,
Davidlohr
