Message-Id: <1455298335-53229-1-git-send-email-Waiman.Long@hpe.com>
Date: Fri, 12 Feb 2016 12:32:11 -0500
From: Waiman Long <Waiman.Long@....com>
To: Ingo Molnar <mingo@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Ding Tianhong <dingtianhong@...wei.com>,
Jason Low <jason.low2@....com>,
Davidlohr Bueso <dave@...olabs.net>,
"Paul E. McKenney" <paulmck@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <Will.Deacon@....com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Waiman Long <Waiman.Long@....com>
Subject: [PATCH v2 0/4] locking/mutex: Enable optimistic spinning of lock waiter
v1->v2:
- Set task state to running before doing optimistic spinning.
- Add 2 more patches to handle possible missed wakeups and wasteful
spinning in the try_to_wake_up() function.
This patchset is a variant of PeterZ's "locking/mutex: Avoid spinner
vs waiter starvation" patch. The major difference is that the
waiter-spinner won't enter into the OSQ used by the spinners. Instead,
it will spin directly on the lock in parallel with the queue head
of the OSQ. So there will be a bit more contention on the lock
cacheline, but that shouldn't have a noticeable impact on system
performance.
This patchset tries to address 2 issues with Peter's patch:
1) Ding Tianhong still found that hanging tasks could happen in some cases.
2) Jason Low found that there was performance regression for some AIM7
workloads.
By making the waiter-spinner spin directly on the mutex, this
patchset increases the chance for the waiter-spinner to get the lock
instead of waiting in the OSQ for its turn.
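As a rough illustration of the idea (a sketch only, not the actual
patch code; mutex_owner_running() is a hypothetical stand-in for the
owner checks done by mutex_spin_on_owner()), the woken waiter's spin
loop looks conceptually like this:

	/* Spin directly on the mutex count while the owner runs. */
	while (mutex_owner_running(lock)) {
		/*
		 * Keep the count at -1 on acquisition as there may
		 * be other waiters still sleeping in the wait queue.
		 */
		if (atomic_read(&lock->count) >= 0 &&
		    atomic_xchg_acquire(&lock->count, -1) == 1)
			return true;	/* lock acquired */
		cpu_relax_lowlatency();
	}
	return false;	/* owner rescheduled; go back to sleep */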
Patch 1 modifies the mutex_optimistic_spin() function to enable it
to be called by a waiter-spinner that doesn't need to go into the OSQ.
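A hedged sketch of that interface change (the actual parameter name
and plumbing in the patch may differ):

	static bool mutex_optimistic_spin(struct mutex *lock,
					  struct ww_acquire_ctx *ww_ctx,
					  const bool use_ww_ctx,
					  const bool waiter)
	{
		if (!waiter) {
			/* Regular spinners still queue up in the OSQ. */
			if (!osq_lock(&lock->osq))
				return false;
		}

		/* ... main owner-tracking spin loop, unchanged ... */

		if (!waiter)
			osq_unlock(&lock->osq);
		return false;
	}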
Patch 2 modifies the mutex locking slowpath to make the waiter call
mutex_optimistic_spin() to do spinning after being woken up.
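Inside the wait loop of __mutex_lock_common(), the change is
conceptually as follows (a sketch; the exact placement and the
unlock/relock of wait_lock in the patch may differ):

	schedule_preempt_disabled();

	/* Back to running state before optimistic spinning (v2). */
	__set_task_state(task, TASK_RUNNING);

	/* Spin on the lock without entering the OSQ. */
	if (mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, true))
		break;	/* lock acquired while spinning */

	spin_lock_mutex(&lock->wait_lock, flags);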
Patch 3 reverses the sequence of setting the task state and changing
the mutex count to -1 to prevent the possibility of a missed wakeup.
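This follows the standard sleep/wakeup ordering pattern. Roughly (a
sketch based on the current wait loop; the task state is reset to
TASK_RUNNING after the loop, as before):

	/*
	 * Old order: the count can become -1 and a wakeup can be
	 * issued while the task is still in the running state, so
	 * the wakeup may be lost when the task then sets itself to
	 * a sleeping state.
	 */
	if (atomic_read(&lock->count) >= 0 &&
	    atomic_xchg_acquire(&lock->count, -1) == 1)
		break;
	__set_task_state(task, state);

	/*
	 * New order: publish the sleeping state first, so a wakeup
	 * issued after the count change will see the task sleeping
	 * and wake it up properly.
	 */
	__set_task_state(task, state);
	if (atomic_read(&lock->count) >= 0 &&
	    atomic_xchg_acquire(&lock->count, -1) == 1)
		break;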
Patch 4 modifies the wakeup code to abandon the wakeup operation
while spinning on the on_cpu flag if the task has changed back to a
non-sleeping state.
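In try_to_wake_up(), this amounts to open-coding the current
smp_cond_acquire(!p->on_cpu) wait along these lines (a sketch only;
the exact exit path in the patch may differ):

	while (READ_ONCE(p->on_cpu)) {
		/*
		 * Abandon the wakeup if the task is no longer in a
		 * sleeping state matching the wakeup mask, e.g. a
		 * mutex waiter that set itself back to running in
		 * order to spin on the lock.
		 */
		if (!(READ_ONCE(p->state) & state))
			goto out;
		cpu_relax();
	}
	smp_rmb();	/* preserve the acquire ordering */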
My own test on a 4-socket E7-4820 v3 system showed a regression of
about 4% in the high_systime workload with Peter's patch, which this
new patchset effectively eliminates.
Testing on an 8-socket Westmere-EX server, however, showed
performance changes ranging from -9% to +140% on the fserver
workload of AIM7, depending on how the system was set up.
Waiman Long (4):
locking/mutex: Add waiter parameter to mutex_optimistic_spin()
locking/mutex: Enable optimistic spinning of woken task in wait queue
locking/mutex: Avoid missed wakeup of mutex waiter
sched/fair: Abort wakeup when task is no longer in a sleeping state
kernel/locking/mutex.c | 119 ++++++++++++++++++++++++++++++++++--------------
kernel/sched/core.c | 9 +++-
2 files changed, 92 insertions(+), 36 deletions(-)