Message-Id: <200908160054.n7G0sfF6025003@mail.q-ag.de>
Date: Sun, 16 Aug 2009 01:32:15 +0200
From: Manfred Spraul <manfred@...orfullife.com>
To: linux-kernel@...r.kernel.org
Cc: Pierre Peiffer <peifferp@...il.com>
Subject: [PATCH 4/4] ipc/sem.c: optimize single sops when semval is zero
If multiple simple decrements on the same semaphore are pending, then the
current code scans all decrement operations, even if the semaphore value is
already 0.
The patch optimizes that: if the semaphore value is 0, then there is no need
to scan the q->alter entries.
Note that this is a common case: it happens when 100 decrements by one are
pending and an increment by one raises the semaphore value from 0 to 1.
Without this patch, all 100 entries are scanned. With the patch, only one
entry is scanned and its task woken up; then the new rule triggers and the
scan is aborted without looking at the remaining 99 tasks.
With this patch, single-sop increments/decrements by 1 are now O(1)
(the same as with Nick's patch).
Signed-off-by: Manfred Spraul <manfred@...orfullife.com>
---
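For illustration only, and not part of the patch: below is a minimal
userspace sketch of the scenario described in the changelog. Everything in
it, including the NPROC constant, is hypothetical demo code. It blocks 100
tasks on a single-sop decrement of one semaphore, then issues a single
increment; with this patch, update_queue() wakes exactly one waiter and
aborts the scan instead of walking the other pending decrements.

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/wait.h>

#define NPROC	100	/* number of blocked decrementers (demo value) */

int main(void)
{
	struct sembuf op;
	int semid, i;

	/* one semaphore, initial value 0 */
	semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
	if (semid == -1) {
		perror("semget");
		return 1;
	}

	for (i = 0; i < NPROC; i++) {
		if (fork() == 0) {
			/* single-sop decrement: sleeps while semval is 0 */
			op.sem_num = 0;
			op.sem_op = -1;
			op.sem_flg = 0;
			semop(semid, &op, 1);
			_exit(0);
		}
	}

	sleep(1);	/* crude: let all children queue up */

	/*
	 * Single-sop increment: semval goes from 0 to 1 and one waiter
	 * is woken. With the patch, the scan stops after that wakeup
	 * instead of looking at the remaining pending decrements.
	 */
	op.sem_num = 0;
	op.sem_op = 1;
	op.sem_flg = 0;
	semop(semid, &op, 1);

	wait(NULL);	/* reap the one woken child */

	semctl(semid, 0, IPC_RMID);	/* remaining waiters get EIDRM */
	while (wait(NULL) > 0)
		;
	return 0;
}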
ipc/sem.c | 11 +++++++++++
1 files changed, 11 insertions(+), 0 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 66192c2..aab39fb 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -473,6 +473,17 @@ again:
 		q = (struct sem_queue *)((char *)walk - offset);
 		walk = walk->next;
 
+		/* If we are scanning the single sop, per-semaphore list of
+		 * one semaphore and that semaphore is 0, then it is not
+		 * necessary to scan the "alter" entries: simple increments
+		 * that affect only one entry succeed immediately and cannot
+		 * be in the per semaphore pending queue, and decrements
+		 * cannot be successful if the value is already 0.
+		 */
+		if (semnum != -1 && sma->sem_base[semnum].semval == 0 &&
+				q->alter)
+			break;
+
 		error = try_atomic_semop(sma, q->sops, q->nsops,
 					 q->undo, q->pid);
 
--
1.6.2.5