Message-Id: <1474225896-10066-1-git-send-email-dave@stgolabs.net>
Date: Sun, 18 Sep 2016 12:11:31 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: akpm@...ux-foundation.org
Cc: manfred@...orfullife.com, dave@...olabs.net,
linux-kernel@...r.kernel.org
Subject: [PATCH -next v2 0/5] ipc/sem: semop(2) improvements
Changes from v1 (https://lkml.org/lkml/2016/9/12/266)
- Got rid of the signal_pending check in wakeup fastpath. (patch 2)
- Added READ_ONCE/WRITE_ONCE accesses to queue.status (we're obviously concerned
about lockless access upon unrelated events, even if the queue is on the stack).
- Got rid of initializing the wake_q and calling wake_up_q on the
perform_atomic_semop error return path. (patch 2)
- Documented ordering between wake_q_add and setting ->status.
- What I did not do was refactor the checks in perform_atomic_semop[_slow],
as I could not find a decent/clean way of doing it without adding more
unnecessary code. If we wanted to do smart scans of the semops we received from
userspace, this would still need to be done under sem_lock for the semval values,
obviously. So I've left it as is, where we mainly duplicate the function, but
I still believe this is the most straightforward way of dealing with this
situation (patch 3).
- Replaced SEMOP_FAST with BITS_PER_LONG, as this is really the limit we want
for the duplicate scanning.
- More testing.
- Added Manfred's ack (patch 5).
Hi,
Here are a few updates around the semop syscall handling that I noticed while
reviewing Manfred's simple vs complex ops fixes. Changes are on top of -next,
which means that Manfred's pending patches to ipc/sem.c that remove the redundant
barrier(s) would probably have to be rebased.
The patchset has survived the following test cases:
- ltp
- ipcsemtest (https://github.com/manfred-colorfu/ipcsemtest)
- ipcscale (https://github.com/manfred-colorfu/ipcscale)
Details are in each individual patch. Please consider for v4.9.
Thanks!
Davidlohr Bueso (5):
ipc/sem: do not call wake_sem_queue_do() prematurely
ipc/sem: rework task wakeups
ipc/sem: optimize perform_atomic_semop()
ipc/sem: explicitly inline check_restart
ipc/sem: use proper list api for pending_list wakeups
ipc/sem.c | 415 ++++++++++++++++++++++++++++++--------------------------------
1 file changed, 199 insertions(+), 216 deletions(-)
--
2.6.6