Message-ID: <1369434096.2138.24.camel@buesod1.americas.hpqcorp.net>
Date: Fri, 24 May 2013 15:21:36 -0700
From: Davidlohr Bueso <davidlohr.bueso@...com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: torvalds@...ux-foundation.org, riel@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 04/11] ipc: move locking out of ipcctl_pre_down_nolock
On Fri, 2013-05-24 at 13:16 -0700, Andrew Morton wrote:
> On Wed, 15 May 2013 18:08:03 -0700 Davidlohr Bueso <davidlohr.bueso@...com> wrote:
>
> > This function currently acquires both the rw_mutex and the rcu lock on
> > successful lookups, leaving the callers to explicitly unlock them, creating
> > another two level locking situation.
> >
> > Make the callers (including those that still use ipcctl_pre_down()) explicitly
> > lock and unlock the rwsem and rcu lock.
> >
> > ...
> >
> > @@ -409,31 +409,38 @@ static int msgctl_down(struct ipc_namespace *ns, int msqid, int cmd,
> > return -EFAULT;
> > }
> >
> > + down_write(&msg_ids(ns).rw_mutex);
> > + rcu_read_lock();
> > +
> > ipcp = ipcctl_pre_down(ns, &msg_ids(ns), msqid, cmd,
> > &msqid64.msg_perm, msqid64.msg_qbytes);
> > - if (IS_ERR(ipcp))
> > - return PTR_ERR(ipcp);
> > + if (IS_ERR(ipcp)) {
> > + err = PTR_ERR(ipcp);
> > + /* the ipc lock is not held upon failure */
>
> Terms like "the ipc lock" are unnecessarily vague. It's better to
> identify the lock by name, eg msg_queue.q_perm.lock.
Ok, I can send a patch rephrasing that comment to name perm.lock explicitly
when I send the shm patchset (which will be very similar to this one).
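To be concrete, in the hunk above only the failure-path comment would change,
to something like (everything else as in the patch):

	ipcp = ipcctl_pre_down(ns, &msg_ids(ns), msqid, cmd,
			       &msqid64.msg_perm, msqid64.msg_qbytes);
	if (IS_ERR(ipcp)) {
		err = PTR_ERR(ipcp);
		/* msq->q_perm.lock is not held upon failure */
		...
	}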
>
> Where should readers go to understand the overall locking scheme? A
> description of the overall object hierarchy and the role which the
> various locks play?
That can be done; how about something like Documentation/ipc-locking.txt?
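The gist it would document for the ctl paths is the nesting order
rw_mutex (rwsem) -> rcu -> perm.lock, with the per-object lock only taken on
a successful lookup. Roughly, for msgctl_down() after this series (sketch
only; the label name and the open-coded spin_unlock are just illustrative):

	down_write(&msg_ids(ns).rw_mutex);	/* serializes vs create/remove */
	rcu_read_lock();

	ipcp = ipcctl_pre_down(ns, &msg_ids(ns), msqid, cmd,
			       &msqid64.msg_perm, msqid64.msg_qbytes);
	if (IS_ERR(ipcp)) {
		err = PTR_ERR(ipcp);
		/* msq->q_perm.lock is not held upon failure */
		goto out_unlock;
	}

	/* ... cmd handling runs with msq->q_perm.lock held ... */

	spin_unlock(&ipcp->lock);		/* i.e. msq->q_perm.lock */
out_unlock:
	rcu_read_unlock();
	up_write(&msg_ids(ns).rw_mutex);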
>
> Have you done any performance testing of this patchset? Just from
> squinting at it, I'd expect the effects to be small...
>
Right, I don't expect much of a performance benefit: (a) unlike sems, I
have never seen SysV msg queues show up as a source of contention, and
(b) I think SysV message queues have mostly been replaced by POSIX ones...
For testing, I did run these patches with ipcmd
(http://code.google.com/p/ipcmd/), pgbench, aim7 and Oracle on large
machines - no regressions, but nothing new in terms of performance either.
I suspect that shm could see a little more impact, but I haven't looked
too much into it.
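FWIW, the sort of trivial userspace loop that hits the msgctl_down() path
directly would be something like the below (just an illustrative sketch,
not one of the workloads above):

/*
 * msgctl-stress.c: trivial loop hammering msgctl(2) on a private queue.
 * Build: gcc -O2 -o msgctl-stress msgctl-stress.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/msg.h>

int main(int argc, char **argv)
{
	long i, iters = argc > 1 ? atol(argv[1]) : 1000000;
	struct msqid_ds buf;
	int id = msgget(IPC_PRIVATE, IPC_CREAT | 0600);

	if (id < 0) {
		perror("msgget");
		return 1;
	}

	for (i = 0; i < iters; i++) {
		if (msgctl(id, IPC_STAT, &buf) < 0) {	/* refresh buf for IPC_SET */
			perror("msgctl(IPC_STAT)");
			break;
		}
		if (msgctl(id, IPC_SET, &buf) < 0) {	/* goes through msgctl_down() */
			perror("msgctl(IPC_SET)");
			break;
		}
	}

	msgctl(id, IPC_RMID, NULL);	/* IPC_RMID also goes through msgctl_down() */
	return 0;
}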
Thanks,
Davidlohr