Date: Sun, 03 Mar 2013 06:53:10 +0100
From: Mike Galbraith <bitbucket@...ine.de>
To: Rik van Riel <riel@...hat.com>
Cc: Michel Lespinasse <walken@...gle.com>,
Davidlohr Bueso <davidlohr.bueso@...com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"Vinod, Chegu" <chegu_vinod@...com>,
"Low, Jason" <jason.low2@...com>,
linux-tip-commits@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>, aquini@...hat.com,
Ingo Molnar <mingo@...nel.org>,
Larry Woodman <lwoodman@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [RFC PATCH 1/2] ipc: introduce obtaining a lockless ipc object
On Sat, 2013-03-02 at 21:18 -0500, Rik van Riel wrote:
> On 03/01/2013 11:32 PM, Michel Lespinasse wrote:
>
> > I think it may be nicer to take the rcu read lock at the call site
> > rather than in ipc_obtain_object(), to make the rcu read lock/unlock
> > sites pair up more nicely. Either that or make an inline
> > ipc_release_object function that pairs up with ipc_obtain_object() and
> > just does an rcu_read_unlock().
>
> I started on a patch series to untangle the IPC locking, so
> it will be a little more readable, and easier to maintain.
>
> It is a slower approach than Davidlohr's, as in, it will take
> a little longer to put a patch series together, but I hope it
> will be easier to debug...
>
> I hope to post a first iteration of the series by the middle
> of next week.
Goody, I'll be watching out for it. I have a big-box rt user who uses
semaphores in their very tightly constrained application. While they're
using them carefully, I saw a trace where the contention cost vs jitter
budget was a bit too high for comfort, with semctl and semop colliding.
-Mike