Message-ID: <1363914891.31240.62.camel@buesod1.americas.hpqcorp.net>
Date: Thu, 21 Mar 2013 18:14:51 -0700
From: Davidlohr Bueso <davidlohr.bueso@...com>
To: Rik van Riel <riel@...riel.com>
Cc: torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, hhuang@...hat.com, jason.low2@...com,
walken@...gle.com, lwoodman@...hat.com, chegu_vinod@...com,
Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 7/7] ipc,sem: fine grained locking for semtimedop
On Wed, 2013-03-20 at 15:55 -0400, Rik van Riel wrote:
> Introduce finer grained locking for semtimedop, to handle the
> common case of a program wanting to manipulate one semaphore
> from an array with multiple semaphores.
>
> If the call is a semop manipulating just one semaphore in
> an array with multiple semaphores, only take the lock for
> that semaphore itself.
>
> If the call needs to manipulate multiple semaphores, or
> another caller is in a transaction that manipulates multiple
> semaphores, the sem_array lock is taken, as well as all the
> locks for the individual semaphores.
>
> On a 24 CPU system, performance numbers with the semop-multi
> test with N threads and N semaphores look like this:
>
>           vanilla   Davidlohr's   Davidlohr's +   Davidlohr's +
> threads             patches       rwlock patches  v3 patches
>  10        610652        726325         1783589         2142206
>  20        341570        365699         1520453         1977878
>  30        288102        307037         1498167         2037995
>  40        290714        305955         1612665         2256484
>  50        288620        312890         1733453         2650292
>  60        289987        306043         1649360         2388008
>  70        291298        306347         1723167         2717486
>  80        290948        305662         1729545         2763582
>  90        290996        306680         1736021         2757524
> 100        292243        306700         1773700         3059159
>
> Signed-off-by: Rik van Riel <riel@...hat.com>
> Suggested-by: Linus Torvalds <torvalds@...ux-foundation.org>
Acked-by: Davidlohr Bueso <davidlohr.bueso@...com>