Message-ID: <1369359930.1770.2.camel@buesod1.americas.hpqcorp.net>
Date: Thu, 23 May 2013 18:45:30 -0700
From: Davidlohr Bueso <davidlohr.bueso@...com>
To: akpm@...ux-foundation.org
Cc: torvalds@...ux-foundation.org, riel@...hat.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 00/11] sysv mqueue: do not hold the ipc lock unnecessarily
ping, Andrew?
On Wed, 2013-05-15 at 18:07 -0700, Davidlohr Bueso wrote:
> This patchset continues the work that began in the sysv ipc semaphore scaling
> series: https://lkml.org/lkml/2013/3/20/546
>
> Just like semaphores used to be, sysv shared memory and msg queues also abuse the ipc
> lock, unnecessarily holding it for operations such as permission and security checks. This
> patchset mostly deals with msg queues, and while shared memory can be handled in a very
> similar way, I want to get these patches out in the open first. It also does some pending
> cleanups, mostly focused on the two-level locking we have in the ipc code, taking care of
> ipc_addid() and ipcctl_pre_down_nolock() - yes, there are still functions that need to be
> updated as well.
>
> I have tried to split each patch to be as readable and specific as possible, especially when
> shortening the critical regions, going one function at a time.
>
> Patch 1 moves the locking to be explicitly done by the callers of ipc_addid.
> It updates msg, sem and shm.
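>
> Roughly (hand-wavy sketch, not the actual hunks), a caller such as newque()
> now does the rcu locking itself instead of having it buried inside ipc_addid():
>
>	rcu_read_lock();
>	id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
>	if (id < 0) {
>		rcu_read_unlock();
>		/* undo the allocation and bail */
>		return id;
>	}
>	/* finish setting up the queue with the object lock held */
>	spin_unlock(&msq->q_perm.lock);
>	rcu_read_unlock();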
>
> Patches 2-3 are just wrappers around the ipc lock, initially suggested by Linus.
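>
> Concretely these are just trivial inlines in ipc/util.h (modulo exact naming),
> so the open coded spin_lock(&foo->bar_perm.lock) calls become self-descriptive:
>
>	static inline void ipc_lock_object(struct kern_ipc_perm *perm)
>	{
>		spin_lock(&perm->lock);
>	}
>
>	static inline void ipc_unlock_object(struct kern_ipc_perm *perm)
>	{
>		spin_unlock(&perm->lock);
>	}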
>
> Patch 4 is similar to patch 1, moving the rcu and rw_mutex locking out of
> ipcctl_pre_down_nolock() so that the callers explicitly deal with them. It updates msg, sem
> and shm.
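>
> With that, a caller like msgctl_down() starts out with something along these
> lines (sketch only, error handling trimmed):
>
>	down_write(&msg_ids(ns).rw_mutex);
>	rcu_read_lock();
>
>	ipcp = ipcctl_pre_down_nolock(ns, &msg_ids(ns), msqid, cmd,
>				      &msqid64.msg_perm, msqid64.msg_qbytes);
>	if (IS_ERR(ipcp)) {
>		err = PTR_ERR(ipcp);
>		goto out_unlock;	/* drops rcu and the rw_mutex */
>	}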
>
> Patch 5 shortens the critical region in msgctl_down(), using the now lockless
> ipcctl_pre_down_nolock() function and only acquiring the ipc lock for the RMID and SET commands.
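>
> So after the lockless lookup and the permission/security checks, only the
> commands that actually modify the queue take the lock, roughly:
>
>	switch (cmd) {
>	case IPC_RMID:
>		ipc_lock_object(&msq->q_perm);
>		/* freeque() takes care of unlocking */
>		freeque(ns, ipcp);
>		goto out_up;
>	case IPC_SET:
>		ipc_lock_object(&msq->q_perm);
>		/* update qbytes, permissions, ctime */
>		ipc_unlock_object(&msq->q_perm);
>		break;
>	default:
>		err = -EINVAL;
>	}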
>
> Patch 6 simply moves the what-should-be-lockless logic of the *_INFO and *_STAT commands
> out of msgctl() into a new function, msgctl_nolock().
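>
> msgctl() itself then becomes a simple dispatcher, more or less (exact
> signatures aside):
>
>	/* in msgctl(), after the usual argument checking: */
>	switch (cmd) {
>	case IPC_INFO:
>	case MSG_INFO:
>	case MSG_STAT:
>	case IPC_STAT:
>		return msgctl_nolock(ns, msqid, cmd, version, buf);
>	case IPC_SET:
>	case IPC_RMID:
>		return msgctl_down(ns, msqid, cmd, buf, version);
>	default:
>		return -EINVAL;
>	}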
>
> Patch 7 introduces the necessary wrappers around ipc_obtain_object[_check]()
> that will later enable us to separately lookup the ipc object without holding the lock.
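>
> These are just thin container_of() wrappers in ipc/msg.c, something like:
>
>	static inline struct msg_queue *msq_obtain_object(struct ipc_namespace *ns, int id)
>	{
>		struct kern_ipc_perm *ipcp = ipc_obtain_object(&msg_ids(ns), id);
>
>		if (IS_ERR(ipcp))
>			return ERR_CAST(ipcp);
>
>		return container_of(ipcp, struct msg_queue, q_perm);
>	}
>
> plus the analogous msq_obtain_object_check() on top of
> ipc_obtain_object_check(), which also validates the sequence number.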
>
> Patch 8 updates the previously added msgctl_nolock() to actually be lockless, reducing
> the critical region for the STAT commands.
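>
> With the above wrappers the STAT path never takes q_perm.lock anymore; it
> boils down to something like:
>
>	rcu_read_lock();
>
>	if (cmd == MSG_STAT)	/* msqid is an index */
>		msq = msq_obtain_object(ns, msqid);
>	else			/* IPC_STAT */
>		msq = msq_obtain_object_check(ns, msqid);
>	if (IS_ERR(msq)) {
>		err = PTR_ERR(msq);
>		goto out_unlock;
>	}
>
>	/* ipcperms() + security checks + copying the msqid64 stats,
>	   all done under rcu only */
>
>	rcu_read_unlock();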
>
> Patch 9 redoes the locking for msgsnd().
>
> Patch 10 redoes the locking for msgrcv().
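>
> For both msgsnd() and msgrcv() the idea is the same: do the lookup and the
> permission/security checks under rcu only, and take the spinlock just around
> the actual queue manipulation. Very roughly:
>
>	rcu_read_lock();
>	msq = msq_obtain_object_check(ns, msqid);
>	if (IS_ERR(msq)) {
>		err = PTR_ERR(msq);
>		goto out_unlock;
>	}
>
>	/* ipcperms(), security_msg_queue_msgsnd/msgrcv(), etc. - lockless */
>
>	ipc_lock_object(&msq->q_perm);
>	/* enqueue or dequeue the message, wakeups, blocking logic --
>	   the only part that really needs the lock */
>	ipc_unlock_object(&msq->q_perm);
>	rcu_read_unlock();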
>
> Patch 11 removes the now unused msg_lock() and msg_lock_check() functions, replaced by
> a smarter combination of rcu, ipc_obtain_object[_check]() and ipc_lock_object().
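>
> For reference, the helper being killed off is just this (it always takes the
> ipc lock, even for callers that only need rcu protection):
>
>	static inline struct msg_queue *msg_lock(struct ipc_namespace *ns, int id)
>	{
>		struct kern_ipc_perm *ipcp = ipc_lock(&msg_ids(ns), id);
>
>		if (IS_ERR(ipcp))
>			return (struct msg_queue *)ipcp;
>
>		return container_of(ipcp, struct msg_queue, q_perm);
>	}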
>
> Davidlohr Bueso (11):
> ipc: move rcu lock out of ipc_addid
> ipc: introduce ipc object locking helpers
> ipc: close open coded spin lock calls
> ipc: move locking out of ipcctl_pre_down_nolock
> ipc,msg: shorten critical region in msgctl_down
> ipc,msg: introduce msgctl_nolock
> ipc,msg: introduce lockless functions to obtain the ipc object
> ipc,msg: make msgctl_nolock lockless
> ipc,msg: reduce critical region in msgsnd
> ipc,msg: shorten critical region in msgrcv
> ipc: remove unused functions
>
> ipc/msg.c | 227 ++++++++++++++++++++++++++++++++++++++-----------------------
> ipc/sem.c | 42 +++++++-----
> ipc/shm.c | 32 ++++++---
> ipc/util.c | 25 ++-----
> ipc/util.h | 22 ++++--
> 5 files changed, 211 insertions(+), 137 deletions(-)
>
> Thanks,
> Davidlohr
>