Message-ID: <CA+55aFyv3u4uuAamCONMBUmB+rh0JsY4HdOj+UKwtTk0Wzf7Ag@mail.gmail.com>
Date: Wed, 20 Mar 2013 13:49:53 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Rik van Riel <riel@...riel.com>
Cc: Davidlohr Bueso <davidlohr.bueso@...com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>, hhuang@...hat.com,
"Low, Jason" <jason.low2@...com>,
Michel Lespinasse <walken@...gle.com>,
Larry Woodman <lwoodman@...hat.com>,
"Vinod, Chegu" <chegu_vinod@...com>
Subject: Re: ipc,sem: sysv semaphore scalability

On Wed, Mar 20, 2013 at 12:55 PM, Rik van Riel <riel@...riel.com> wrote:
>
> This series makes the sysv semaphore code more scalable,
> by reducing the time the semaphore lock is held, and making
> the locking more scalable for semaphore arrays with multiple
> semaphores.

The series looks sane to me, and I like how each individual step is
pretty small and makes sense.

It *would* be lovely to see this run with the actual Swingbench
numbers. The microbenchmark always looked much nicer. Do the
additional multi-semaphore scalability patches on top of Davidlohr's
patches help with the Swingbench issue, or are we still totally
swamped by the ipc lock there?

Maybe there were already numbers for that, but the last Swingbench
numbers I can actually recall were from before the finer-grained
locking..

And obviously, getting this tested so that there aren't any more
missed wakeups etc. would be lovely. I'm assuming the plan is that this
all goes through Andrew? Do we have big semop users who could test it
on real loads? Considering that I *suspect* the main users are things
like Oracle etc., I'd assume that there's some RH lab or partner or
similar that is interested in making sure this not only helps, but
also that it doesn't break anything ;)

Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/