Message-ID: <20130410102829.GA28505@gmail.com>
Date: Wed, 10 Apr 2013 12:28:29 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Waiman Long <Waiman.Long@...com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
David Howells <dhowells@...hat.com>,
Dave Jones <davej@...hat.com>,
Clark Williams <williams@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Davidlohr Bueso <davidlohr.bueso@...com>,
linux-kernel@...r.kernel.org,
"Chandramouleeswaran, Aswin" <aswin@...com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Andrew Morton <akpm@...ux-foundation.org>,
"Norton, Scott J" <scott.norton@...com>,
Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH RFC 1/3] mutex: Make more scalable by doing less atomic
operations
* Waiman Long <Waiman.Long@...com> wrote:
> > Furthermore, since you are seeing this effect so profoundly, have you
> > considered using another approach, such as queueing all the poll-waiters in
> > some fashion?
> >
> > That would optimize your workload additionally: removing the 'stampede' of
> > trylock attempts when an unlock happens - only a single wait-poller would get
> > the lock.
>
> The mutex code in the slowpath has already put the waiters into a sleep queue
> and wakes them up only one at a time.
Yes - but I'm talking about spin/poll-waiters.
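(For illustration only - a minimal userspace sketch, not the actual kernel/mutex.c code, with all names invented - of that wake-one behavior: blocked waiters are queued FIFO and the unlocker hands off to exactly one of them, so the sleepers never stampede:)

/*
 * Illustrative userspace analogue (not kernel code): blocked waiters
 * sit on a FIFO list and an unlock wakes exactly one of them.
 */
#include <pthread.h>
#include <stdbool.h>

struct waiter {
	pthread_cond_t	wake;
	bool		woken;
	struct waiter	*next;
};

struct wait_queue {
	pthread_mutex_t	lock;		/* protects the FIFO */
	struct waiter	*head, *tail;
};

/* Lock slowpath: queue ourselves and sleep until woken. */
static void sleep_on(struct wait_queue *q, struct waiter *me)
{
	pthread_cond_init(&me->wake, NULL);
	me->woken = false;
	me->next = NULL;

	pthread_mutex_lock(&q->lock);
	if (q->tail)
		q->tail->next = me;
	else
		q->head = me;
	q->tail = me;
	while (!me->woken)
		pthread_cond_wait(&me->wake, &q->lock);
	pthread_mutex_unlock(&q->lock);
}

/* Unlock slowpath: wake the first waiter only - no stampede of sleepers. */
static void wake_one(struct wait_queue *q)
{
	pthread_mutex_lock(&q->lock);
	if (q->head) {
		struct waiter *w = q->head;

		q->head = w->next;
		if (!q->head)
			q->tail = NULL;
		w->woken = true;
		pthread_cond_signal(&w->wake);
	}
	pthread_mutex_unlock(&q->lock);
}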
> [...] However, there are 2 additional sources of mutex lockers besides those in
> the sleep queue:
>
> 1. New tasks trying to acquire the mutex and currently in the fast path.
> 2. Mutex spinners (CONFIG_MUTEX_SPIN_ON_OWNER on) who are spinning
> on the owner field and ready to acquire the mutex once the owner
> field changes.
>
> The 2nd and 3rd patches are my attempts to limit the second type of mutex
> lockers.
Even the 1st patch seems to do that - it limits the impact of spin-loopers, right?
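(Again just a rough userspace analogue of the two extra locker sources described above - not the real fastpath or CONFIG_MUTEX_SPIN_ON_OWNER code; 'toy_mutex' and both helpers are made-up names for the example:)

#include <stdatomic.h>
#include <stdbool.h>
#include <sched.h>

struct toy_mutex {
	atomic_int	count;		/* 1: unlocked, 0: locked */
	_Atomic(void *)	owner;		/* current owner task, NULL when none */
};

/* Source 1: a new task takes the fastpath with a single atomic op. */
static bool fastpath_trylock(struct toy_mutex *m)
{
	int unlocked = 1;

	return atomic_compare_exchange_strong(&m->count, &unlocked, 0);
}

/* Source 2: spinners poll the owner field and retry once it changes. */
static bool spin_on_owner(struct toy_mutex *m, void *owner)
{
	while (atomic_load(&m->owner) == owner)
		sched_yield();	/* the kernel also checks whether the owner is running */

	/* owner changed: every spinner retries at once - the "stampede" */
	return fastpath_trylock(m);
}

Both sources hammer the same mutex cache line, which is why reducing the number of atomic operations per attempt matters at high contention.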
I'm fine with patch #1 [your numbers are proof enough that it helps while the low
client count effect seems to be in the noise] - the questions that seem open to me
are:
- Could the approach in patch #1 be further improved by an additional patch that
adds queueing to the _spinners_ in some fashion - like ticket spin locks try to
do in essence? Not queue the blocked waiters (they are already queued), but the
active spinners. This would have additional benefits, especially with a high
CPU count and a high NUMA factor, by removing the stampede effect as owners get
switched. (A rough sketch of what such spinner queueing could look like follows
after this list.)
- Why does patch #2 have an effect? (it shouldn't, at first glance) It has a
non-trivial cost: it increases the size of 'struct mutex' by 8 bytes, and that
structure is embedded in numerous kernel data structures. When doing
comparisons I'd suggest comparing it not to just vanilla, but to a patch that
only extends the struct mutex data structure (and changes no code) - this
allows the isolation of cache layout change effects.
- Patch #3 is rather ugly - and my hope would be that if spinners are queued in
some fashion it becomes unnecessary.
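(As promised above, here is a rough userspace sketch of what such spinner queueing could look like - an MCS-style queue; this is hypothetical, not an existing kernel API. Each spinner busy-waits on its own node, so an owner switch releases a single spinner instead of all of them bouncing the mutex cache line across the NUMA fabric:)

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct spin_node {
	_Atomic(struct spin_node *)	next;
	atomic_bool			wait;	/* spun on locally */
};

struct spin_queue {
	_Atomic(struct spin_node *)	tail;
};

/* Become the (single) active spinner, queueing behind earlier spinners. */
static void spinner_queue_enter(struct spin_queue *q, struct spin_node *me)
{
	struct spin_node *prev;

	atomic_store(&me->next, NULL);
	atomic_store(&me->wait, true);

	prev = atomic_exchange(&q->tail, me);
	if (prev) {
		atomic_store(&prev->next, me);
		while (atomic_load(&me->wait))
			;	/* spin on our own cache line only */
	}
	/* now we alone poll the mutex owner / do the trylock */
}

/* Hand the "active spinner" role to the next queued spinner, if any. */
static void spinner_queue_exit(struct spin_queue *q, struct spin_node *me)
{
	struct spin_node *next = atomic_load(&me->next);

	if (!next) {
		struct spin_node *old = me;

		if (atomic_compare_exchange_strong(&q->tail, &old,
						   (struct spin_node *)NULL))
			return;		/* nobody was queued behind us */
		while (!(next = atomic_load(&me->next)))
			;		/* successor is still linking in */
	}
	atomic_store(&next->wait, false);
}

This gives the same FIFO, one-active-claimant property a ticket lock provides, without every waiter polling the shared lock word.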
Thanks,
Ingo