Date: Tue, 7 Aug 2018 19:29:49 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Will Deacon <will.deacon@....com>
Cc: linux-kernel@...r.kernel.org, Joe Mario <jmario@...hat.com>,
Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [PATCH v3] locking/rwsem: Exit read lock slowpath if queue empty & no writer
On 07/24/2018 03:10 PM, Waiman Long wrote:
> It was discovered that a constant stream of readers with occasional
> writers pounding on an rwsem may cause many of the readers to enter the
> slowpath unnecessarily, thus increasing latency and lowering performance.
>
> In the current code, a reader entering the slowpath critical section
> will unconditionally set the WAITING_BIAS, if not already set, and
> clear its active count even if no one is in the wait queue and no
> writer is present. This causes some incoming readers to observe the
> presence of waiters in the wait queue and hence to go into the
> slowpath themselves.
>
> With a sufficient number of readers and a relatively short lock hold
> time, the WAITING_BIAS may be repeatedly turned on and off, and a
> substantial portion of the readers will go into the slowpath,
> sustaining long contention on the wait queue spinlock and a repeated
> WAITING_BIAS on/off cycle until the logjam is opportunistically broken.
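
To illustrate the count arithmetic above, here is a minimal userspace
sketch of the bias handling. The bias constants mirror the 64-bit
rwsem-xadd values; the rest of the program is illustrative only, not
kernel code:

#include <stdatomic.h>
#include <stdio.h>

/* Illustrative copies of the 64-bit rwsem-xadd bias values. */
#define RWSEM_ACTIVE_BIAS	1L
#define RWSEM_ACTIVE_MASK	0xffffffffL
#define RWSEM_WAITING_BIAS	(-RWSEM_ACTIVE_MASK - 1)
#define RWSEM_ACTIVE_READ_BIAS	RWSEM_ACTIVE_BIAS

static _Atomic long count;

int main(void)
{
	/* A reader adds its active bias up front in the fast path. */
	atomic_fetch_add(&count, RWSEM_ACTIVE_READ_BIAS);

	/*
	 * Pre-patch slowpath with an empty wait queue: the reader gives
	 * back its active bias and sets WAITING_BIAS in a single xadd,
	 * driving the count negative although no writer is involved.
	 */
	atomic_fetch_add(&count, -RWSEM_ACTIVE_READ_BIAS + RWSEM_WAITING_BIAS);

	/*
	 * The next incoming reader gets a negative result from its
	 * fast-path add-and-return and is forced into the slowpath too.
	 */
	long c = atomic_fetch_add(&count, RWSEM_ACTIVE_READ_BIAS)
		 + RWSEM_ACTIVE_READ_BIAS;
	printf("incoming reader sees count = %ld -> slowpath\n", c);
	return 0;
}

Compiled with a C11 compiler, it prints a negative count for the
incoming reader, which is exactly the cascade described above.
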
>
> To prevent this situation from happening, an additional check is added
> to detect the special case where the reader in the critical section is
> the only one in the wait queue and no writer is present. When that
> happens, it can just exit the slowpath and return immediately, as its
> active count has already been set in the lock count. Other incoming
> readers won't observe the presence of waiters and so will not be
> forced into the slowpath.
>
> The issue was found at a customer site where an application pounded
> heavily on the pread64 syscalls on an XFS filesystem. The application
> was run on recent 4-socket boxes with a large number of CPUs. They saw
> significant spinlock contention in the rwsem_down_read_failed() call.
> With this patch applied, the system CPU usage went down from 85% to
> 57%, and the spinlock contention in the pread64 syscalls was gone.
>
> v3: Revise the commit log and comment again.
> v2: Add customer testing results and remove wording that may cause
> confusion.
>
> Signed-off-by: Waiman Long <longman@...hat.com>
> ---
> kernel/locking/rwsem-xadd.c | 13 ++++++++++++-
> 1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index 3064c50..01fcb80 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -233,8 +233,19 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
> waiter.type = RWSEM_WAITING_FOR_READ;
>
> raw_spin_lock_irq(&sem->wait_lock);
> - if (list_empty(&sem->wait_list))
> + if (list_empty(&sem->wait_list)) {
> + /*
> + * In case the wait queue is empty and the lock isn't owned
> + * by a writer, this reader can exit the slowpath and return
> + * immediately as its RWSEM_ACTIVE_READ_BIAS has already
> + * been set in the count.
> + */
> + if (atomic_long_read(&sem->count) >= 0) {
> + raw_spin_unlock_irq(&sem->wait_lock);
> + return sem;
> + }
> adjustment += RWSEM_WAITING_BIAS;
> + }
> list_add_tail(&waiter.list, &sem->wait_list);
>
> /* we're now waiting on the lock, but no longer actively locking */
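
A note on the new check: the count >= 0 test works because both
WAITING_BIAS and the writer's bias are large negative values, so the
count can stay non-negative only while nothing but active readers hold
the lock. A quick userspace illustration of that sign property (bias
constants copied here for demonstration only):

#include <stdio.h>

/* Illustrative copies of the 64-bit rwsem-xadd bias values. */
#define RWSEM_ACTIVE_BIAS	1L
#define RWSEM_ACTIVE_MASK	0xffffffffL
#define RWSEM_WAITING_BIAS	(-RWSEM_ACTIVE_MASK - 1)
#define RWSEM_ACTIVE_WRITE_BIAS	(RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)

int main(void)
{
	/* Readers only: the count is just the number of active readers. */
	printf("3 readers:           %ld\n", 3 * RWSEM_ACTIVE_BIAS);

	/* Any writer or a set WAITING_BIAS drives the count negative. */
	printf("writer + 3 readers:  %ld\n",
	       RWSEM_ACTIVE_WRITE_BIAS + 3 * RWSEM_ACTIVE_BIAS);
	printf("waiters + 3 readers: %ld\n",
	       RWSEM_WAITING_BIAS + 3 * RWSEM_ACTIVE_BIAS);
	return 0;
}
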
Is this patch eligible to go into 4.19 or 4.20?
Thanks,
Longman