Message-ID: <50042b05-2e9c-8483-710c-0f0eafc658e0@colorfullife.com>
Date:   Mon, 19 Sep 2016 20:40:57 +0200
From:   Manfred Spraul <manfred@...orfullife.com>
To:     Davidlohr Bueso <dave@...olabs.net>, akpm@...ux-foundation.org
Cc:     linux-kernel@...r.kernel.org
Subject: Re: [PATCH -next v2 0/5] ipc/sem: semop(2) improvements

On 09/18/2016 09:11 PM, Davidlohr Bueso wrote:
> Changes from v1 (https://lkml.org/lkml/2016/9/12/266)
> - Got rid of the signal_pending check in wakeup fastpath. (patch 2)
> - Added read/access once to queue.status (we're obviously concerned about
>   lockless access upon unrelated events, even if on the stack).
> - Got rid of initializing wake_q and wake_up_q call upon perform_atomic_semop
>    error return path. (patch 2)
> - Documented ordering between wake_q_add and setting ->status.
> - What I did not do was refactor the checks in perform_atomic_semop[_slow]
>    as I could not get a decent/clean way of doing it without adding more
>    unnecessary code. If we wanted to do smart semop scans that we received from
>    userspace, this would still need to be done under sem_lock for semval values
>    obviously. So I've left it as is, where we mainly duplicate the function, but
>    I still believe this is the most straightforward way of dealing with this
>    situation  (patch 3).
> - Replaced SEMOP_FAST with BITS_PER_LONG, as this is really what we want
>    to limit the duplicate scanning to.
> - More testing.
> - Added Manfred's ack (patch 5).
>
> Hi,
>
> Here are a few updates around the semop syscall handling that I noticed while
> reviewing Manfred's simple vs complex ops fixes. Changes are on top of -next,
> which means that Manfred's pending patches to ipc/sem.c that remove the redundant
> barrier(s) would probably have to be rebased.
>
> The patchset has survived the following test cases:
> - ltp
> - ipcsemtest (https://github.com/manfred-colorfu/ipcsemtest)
> - ipcscale (https://github.com/manfred-colorfu/ipcscale)
>
> Details are in each individual patch. Please consider for v4.9.
>
> Thanks!
>
> Davidlohr Bueso (5):
>    ipc/sem: do not call wake_sem_queue_do() prematurely
This is the only patch in the series that I don't like.
In particular, patch 2 removes the wake_up_q() call from the function 
epilogue, so only the code duplication (additional instances of 
rcu_read_unlock()) remains; I don't see any advantage.

>    ipc/sem: rework task wakeups
Acked
>    ipc/sem: optimize perform_atomic_semop()
I'm still thinking about it.
Code duplication is evil, but perhaps it is the best solution.

What I don't like is the hard-coded "< BITS_PER_LONG" check.
At the very least, the shift count should be masked:
- (1 << sop->sem_num)
+ (1 << (sop->sem_num % BITS_PER_LONG))
>    ipc/sem: explicitly inline check_restart
Do we really need that? Isn't inlining the compiler's task?
Especially since the compiler already does it correctly here.
>    ipc/sem: use proper list api for pending_list wakeups
Acked
>   ipc/sem.c | 415 ++++++++++++++++++++++++++++++--------------------------------
>   1 file changed, 199 insertions(+), 216 deletions(-)
>
--

     Manfred
