Message-ID: <20160716012712.GB2271@linux-80c1.suse>
Date: Fri, 15 Jul 2016 18:27:12 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: Manfred Spraul <manfred@...orfullife.com>
Cc: "H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...e.hu>, 1vier1@....de,
felixh@...ormatik.uni-bremen.de, stable@...r.kernel.org
Subject: Re: [PATCH 1/2] ipc/sem.c: Fix complex_count vs. simple op race
On Wed, 13 Jul 2016, Manfred Spraul wrote:
>-static void sem_wait_array(struct sem_array *sma)
>+static void complexmode_enter(struct sem_array *sma)
> {
> int i;
> struct sem *sem;
>
>- if (sma->complex_count) {
>- /* The thread that increased sma->complex_count waited on
>- * all sem->lock locks. Thus we don't need to wait again.
>- */
>+ if (sma->complex_mode) {
>+ /* We are already in complex_mode. Nothing to do */
> return;
> }
>+ WRITE_ONCE(sma->complex_mode, true);
So we can actually drop those READ/WRITE_ONCE calls for complex_mode: it's
a bool, so tearing is not an issue.
>+
>+ /* We need a full barrier:
>+ * The write to complex_mode must be visible
>+ * before we read the first sem->lock spinlock state.
>+ */
>+ smp_mb();
smp_store_mb()?
> /*
>@@ -300,56 +338,40 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
> /* Complex operation - acquire a full lock */
> ipc_lock_object(&sma->sem_perm);
>
>- /* And wait until all simple ops that are processed
>- * right now have dropped their locks.
>- */
>- sem_wait_array(sma);
>+ /* Prevent parallel simple ops */
>+ complexmode_enter(sma);
> return -1;
nit and unrelated: we should probably use a better label here than a raw
-1 (although I don't see it changing, just for nicer reading), e.g. SEM_OBJECT_LOCKED.
Thanks,
Davidlohr