Message-ID: <d6da8548-6d4e-7537-05b6-8d32812c49d2@colorfullife.com>
Date: Sat, 1 Jul 2017 21:23:03 +0200
From: Manfred Spraul <manfred@...orfullife.com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
linux-kernel@...r.kernel.org
Cc: netfilter-devel@...r.kernel.org, netdev@...r.kernel.org,
oleg@...hat.com, akpm@...ux-foundation.org, mingo@...hat.com,
dave@...olabs.net, tj@...nel.org, arnd@...db.de,
linux-arch@...r.kernel.org, will.deacon@....com,
peterz@...radead.org, stern@...land.harvard.edu,
parri.andrea@...il.com, torvalds@...ux-foundation.org
Subject: Re: [PATCH RFC 06/26] ipc: Replace spin_unlock_wait() with
lock/unlock pair
On 06/30/2017 02:01 AM, Paul E. McKenney wrote:
> There is no agreed-upon definition of spin_unlock_wait()'s semantics,
> and it appears that all callers could do just as well with a lock/unlock
> pair. This commit therefore replaces the spin_unlock_wait() call in
> exit_sem() with spin_lock() followed immediately by spin_unlock().
> This should be safe from a performance perspective because exit_sem()
> is rarely invoked in production.
>
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Davidlohr Bueso <dave@...olabs.net>
> Cc: Manfred Spraul <manfred@...orfullife.com>
> Cc: Will Deacon <will.deacon@....com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Alan Stern <stern@...land.harvard.edu>
> Cc: Andrea Parri <parri.andrea@...il.com>
> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Acked-by: Manfred Spraul <manfred@...orfullife.com>
> ---
> ipc/sem.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/ipc/sem.c b/ipc/sem.c
> index 947dc2348271..e88d0749a929 100644
> --- a/ipc/sem.c
> +++ b/ipc/sem.c
> @@ -2096,7 +2096,8 @@ void exit_sem(struct task_struct *tsk)
>  			 * possibility where we exit while freeary() didn't
>  			 * finish unlocking sem_undo_list.
>  			 */
> -			spin_unlock_wait(&ulp->lock);
> +			spin_lock(&ulp->lock);
> +			spin_unlock(&ulp->lock);
>  			rcu_read_unlock();
>  			break;
>  		}
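
For anyone wondering why the lock/unlock pair is sufficient here, below is a
minimal userspace sketch of the idiom (not the kernel code itself): pthread
spinlocks stand in for the kernel's spinlock_t, and fake_ulp, freeary_thread()
and exit_sem_thread() are illustrative names only. Once the waiter has
observed the marker that the holder set inside its critical section, acquiring
and immediately releasing the lock guarantees that the holder's critical
section, including its unlock, has completed.

/*
 * Minimal userspace sketch of the spin_unlock_wait() replacement.
 * Illustrative only; build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static struct {
	pthread_spinlock_t lock;
	int undo_list_freed;
} fake_ulp;

/* Set inside the critical section, like un->semid = -1 in freeary(). */
static atomic_int marker;

/* Stand-in for freeary(): marks the entry while holding the lock. */
static void *freeary_thread(void *arg)
{
	pthread_spin_lock(&fake_ulp.lock);
	atomic_store(&marker, 1);
	usleep(1000);			/* simulate work under the lock */
	fake_ulp.undo_list_freed = 1;
	pthread_spin_unlock(&fake_ulp.lock);
	return NULL;
}

/* Stand-in for exit_sem(): waits for the concurrent freeary() to finish. */
static void *exit_sem_thread(void *arg)
{
	/* Corresponds to observing the marker set under ulp->lock. */
	while (!atomic_load(&marker))
		;

	/*
	 * Replacement for spin_unlock_wait(&ulp->lock): the acquisition
	 * cannot succeed until the holder that set the marker has released
	 * the lock, so its whole critical section has completed.
	 */
	pthread_spin_lock(&fake_ulp.lock);
	pthread_spin_unlock(&fake_ulp.lock);

	printf("undo_list_freed = %d\n", fake_ulp.undo_list_freed);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_spin_init(&fake_ulp.lock, PTHREAD_PROCESS_PRIVATE);
	pthread_create(&t1, NULL, freeary_thread, NULL);
	pthread_create(&t2, NULL, exit_sem_thread, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

In this sketch the program always prints undo_list_freed = 1: the lock/unlock
pair orders the final read after the entire critical section of the thread
that set the marker, which is exactly the guarantee exit_sem() needs from the
former spin_unlock_wait().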