Message-ID: <8644ca1b-caa3-0a20-efd7-826ad1cbddd9@redhat.com>
Date: Wed, 17 Apr 2019 13:16:39 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will.deacon@....com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, x86@...nel.org,
Davidlohr Bueso <dave@...olabs.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
huang ying <huang.ying.caritas@...il.com>
Subject: Re: [PATCH v4 10/16] locking/rwsem: Wake up almost all readers in
wait queue
On 04/17/2019 09:39 AM, Peter Zijlstra wrote:
> On Sat, Apr 13, 2019 at 01:22:53PM -0400, Waiman Long wrote:
>> When the front of the wait queue is a reader, other readers
>> immediately following the first reader will also be woken up at the
>> same time. However, if there is a writer in between, those readers
>> behind the writer will not be woken up.
>>
>> Because of optimistic spinning, the lock acquisition order is not FIFO
>> anyway. The lock handoff mechanism will ensure that lock starvation
>> will not happen.
>>
>> Assuming that the lock hold times of the other readers still in the
>> queue will be about the same as those of the readers being woken up,
>> there is really not much additional cost beyond the extra latency the
>> waker incurs to wake up more tasks. Therefore, when the first waiter
>> is a reader, all the readers in the queue, up to a maximum of 256,
>> are woken up to improve reader throughput.
>>
>> With a locking microbenchmark running on a 5.1-based kernel, the total
>> locking rates (in kops/s) on an 8-socket IvyBridge-EX system with
>> equal numbers of readers and writers before and after this patch were
>> as follows:
>>
>>   # of Threads    Pre-Patch    Post-patch
>>   ------------    ---------    ----------
>>        4             1,641        1,674
>>        8               731        1,062
>>       16               564          924
>>       32                78          300
>>       64                38          195
>>      240                50          149
>>
>> There is no performance gain at low contention levels. At high
>> contention levels, however, this patch gives a pretty decent
>> performance boost.
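
For anyone who has not looked at the patch itself, the new behavior is
conceptually something like the scan below, applied when the waiter at
the head of the queue is a reader. This is a much simplified sketch with
a hypothetical function name and cap constant, not the actual rwsem
wakeup path, which also has to deal with the reader count and the lock
handoff bit:

/*
 * Simplified sketch only (hypothetical names, assumes the internal
 * rwsem waiter structures). Walk the whole wait queue, wake every
 * reader found and skip over writers, but stop after a fixed cap.
 */
#define MAX_READERS_TO_WAKE	256	/* cap on readers woken in one go */

static void wake_all_queued_readers(struct rw_semaphore *sem,
				    struct wake_q_head *wake_q)
{
	struct rwsem_waiter *waiter, *tmp;
	int woken = 0;

	lockdep_assert_held(&sem->wait_lock);

	list_for_each_entry_safe(waiter, tmp, &sem->wait_list, list) {
		if (waiter->type == RWSEM_WAITING_FOR_WRITE)
			continue;		/* leave writers queued */

		list_del(&waiter->list);	/* dequeue this reader */
		wake_q_add(wake_q, waiter->task);

		if (++woken >= MAX_READERS_TO_WAKE)
			break;
	}
}
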
> Right, so this basically completes the conversion from task-fair (FIFO)
> to phase-fair.
>
> https://cs.unc.edu/~anderson/papers/rtsj10-for-web.pdf
Right, the changes that I am making are similar in concept to the
phase-fair rwlock described in that article. It is an interesting
paper, though I was not aware of it before you brought it up.
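
To spell out the analogy as I understand the paper (not a claim that the
rwsem code now matches it exactly): a phase-fair rwlock alternates
reader phases and writer phases whenever both kinds of waiters are
present. All readers queued at the start of a reader phase enter
together, then one writer gets a writer phase, and readers that arrived
in the meantime form the next reader phase. Waking all the queued
readers in one go, combined with the handoff mechanism that guarantees
the writer at the head will eventually get its turn, gives roughly that
alternation.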
Cheers,
Longman