Message-ID: <573021D5.4070406@hurleysoftware.com>
Date: Sun, 8 May 2016 22:36:21 -0700
From: Peter Hurley <peter@...leysoftware.com>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: mingo@...nel.org, peterz@...radead.org, tglx@...utronix.de,
Waiman.Long@....com, jason.low2@...com,
linux-kernel@...r.kernel.org, Davidlohr Bueso <dbueso@...e.de>
Subject: Re: [PATCH 1/4] locking/rwsem: Avoid stale ->count for
rwsem_down_write_failed()
On 05/08/2016 09:56 PM, Davidlohr Bueso wrote:
> The field is obviously updated without holding the lock and needs a
> READ_ONCE while waiting for lock holder(s) to go away, just like we do
> with all other ->count accesses.
This isn't actually fixing a bug: the loop passes through several full
barriers (schedule() and set_current_state()) before the condition is
re-evaluated, and those already force a reload of sem->count.
The patch is fine if you want it purely for consistency, but please
change $subject and the changelog accordingly.
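
To make the distinction concrete, here is a minimal userspace sketch
(not the kernel code; the names are invented for this example):

/* Why a barrier in the loop body already forces a reload, and what a
 * READ_ONCE()-style volatile access adds on top of that.
 */
long shared_count;				/* stands in for sem->count */

/* No barrier: the compiler may cache shared_count in a register and
 * spin on the stale value forever.
 */
void wait_plain(void)
{
	while (shared_count != 0)
		;
}

/* A full barrier per iteration (as schedule() and set_current_state()
 * imply in the hunk below) clobbers memory, so the compiler must
 * reload shared_count on every pass.
 */
void wait_with_barrier(void)
{
	do {
		__asm__ __volatile__("" ::: "memory");	/* compiler barrier */
	} while (shared_count != 0);
}

/* A volatile access (roughly what READ_ONCE() does for scalars)
 * forbids caching and tearing regardless of what else the loop body
 * contains, and documents the lockless read.
 */
void wait_with_read_once(void)
{
	while (*(volatile long *)&shared_count != 0)
		;
}

The loop in the hunk below is always in the second case, so the
READ_ONCE() there is documentation rather than a correctness fix.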
Regards,
Peter Hurley
> Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
> ---
> kernel/locking/rwsem-xadd.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index df4dcb883b50..7d62772600cf 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -494,7 +494,7 @@ __rwsem_down_write_failed_common(struct rw_semaphore *sem, int state)
>  			}
>  			schedule();
>  			set_current_state(state);
> -		} while ((count = sem->count) & RWSEM_ACTIVE_MASK);
> +		} while ((count = READ_ONCE(sem->count)) & RWSEM_ACTIVE_MASK);
>
>  		raw_spin_lock_irq(&sem->wait_lock);
>  	}
>
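
For context, on scalar types READ_ONCE() boils down to a volatile
access; a simplified sketch (not the exact macro from
<linux/compiler.h>, which also handles non-scalar sizes):

/* Simplified model of READ_ONCE() for scalar types. */
#define READ_ONCE_SKETCH(x)	(*(const volatile typeof(x) *)&(x))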