Message-ID: <1407897625.4216.1.camel@buesod1.americas.hpqcorp.net>
Date: Tue, 12 Aug 2014 19:40:25 -0700
From: Davidlohr Bueso <davidlohr@...com>
To: Manfred Spraul <manfred@...orfullife.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
David Howells <dhowells@...hat.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...hat.com>,
Davidlohr Bueso <davidlohr.bueso@...com>,
Rik van Riel <riel@...hat.com>, 1vier1@....de
Subject: Re: [PATCH] ipc/sem.c: [RFC] memory barrier in sem_lock()
On Tue, 2014-08-12 at 21:43 +0200, Manfred Spraul wrote:
> sem_lock right now contains an smp_mb().
> I think smp_rmb() would be sufficient - and semop() with rmb() is up to 10%
> faster. The rmb() would pair with the spin_unlock().
>
> The race we must protect against is:
>
> sem->lock is free
> sma->complex_count = 0
> sma->sem_perm.lock held by thread B
>
> thread A:
>
> A: spin_lock(&sem->lock)
>
> B: sma->complex_count++; (now 1)
> B: spin_unlock(&sma->sem_perm.lock);
>
> A: spin_is_locked(&sma->sem_perm.lock);
> A: XXXXX which memory barrier is necessary?
> A: if (sma->complex_count == 0)
>
> Thread A must read the incremented complex_count value, i.e. the read must
> not be reordered with the read of sem_perm.lock done by spin_is_locked().
>
> But that's it, there are no writes that must be ordered.
Looks right. You might also replace the current (useless) comment with
the patch's changelog, describing why the barrier is there.
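For reference, here is roughly what the fast path could look like with the
rmb() plus such a comment. The surrounding sem_lock() code isn't quoted in
this thread, so treat the structure below (including the sem/sops names from
sem_lock()'s arguments) as a sketch based on the race described above, not
as the actual patch:

	if (sma->complex_count == 0) {
		/* No complex operation appears to be queued:
		 * take only the per-semaphore lock.
		 */
		spin_lock(&sem->lock);

		/* Is the global lock free as well? */
		if (!spin_is_locked(&sma->sem_perm.lock)) {
			/*
			 * Order the read of sem_perm.lock above against the
			 * re-read of complex_count below.  The smp_rmb()
			 * pairs with the spin_unlock(&sma->sem_perm.lock)
			 * of a complex operation: if we observe that unlock,
			 * we must also observe the complex_count increment
			 * done while the global lock was held.
			 */
			smp_rmb();

			if (sma->complex_count == 0)
				return sops->sem_num;	/* fast path */
		}
		spin_unlock(&sem->lock);
	}
	/* fall back to the global sma->sem_perm.lock */

Only reads need ordering on this side, which is why the rmb() should be
enough.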
Thanks,
Davidlohr