Message-ID: <1389747999.4971.27.camel@buesod1.americas.hpqcorp.net>
Date: Tue, 14 Jan 2014 17:06:39 -0800
From: Davidlohr Bueso <davidlohr@...com>
To: Jason Low <jason.low2@...com>
Cc: mingo@...hat.com, peterz@...radead.org, paulmck@...ux.vnet.ibm.com,
Waiman.Long@...com, torvalds@...ux-foundation.org,
tglx@...utronix.de, linux-kernel@...r.kernel.org, riel@...hat.com,
akpm@...ux-foundation.org, hpa@...or.com, aswin@...com,
scott.norton@...com
Subject: Re: [RFC 3/3] mutex: When there is no owner, stop spinning after
too many tries
On Tue, 2014-01-14 at 16:33 -0800, Jason Low wrote:
> When running workloads that have high mutex contention on an 8-socket
> machine, spinners would often spin for a long time with no lock owner.
>
> One of the potential reasons for this is that a thread can be preempted
> after clearing lock->owner but before releasing the lock,
What happens if you invert the order here, so that mutex_clear_owner() is
called after the actual unlocking (__mutex_fastpath_unlock())?
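Something along these lines (a completely untested sketch, just to
illustrate what I mean; note that clearing the owner after the release
opens a different window, where a new owner's write to lock->owner could
be overwritten by the old holder):

	void __sched mutex_unlock(struct mutex *lock)
	{
		/*
		 * Release the lock first, then clear the owner field.
		 * A spinner that still sees a non-NULL owner after
		 * ->count went to 1 will simply retry the count check,
		 * instead of spinning on a NULL owner.
		 */
		__mutex_fastpath_unlock(&lock->count, __mutex_unlock_slowpath);
	#ifndef CONFIG_DEBUG_MUTEXES
		mutex_clear_owner(lock);
	#endif
	}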
> or preempted after
> acquiring the mutex but before setting lock->owner.
That would be the case _only_ for the fastpath. For the slowpath
(including optimistic spinning) preemption is already disabled at that
point.
> In those cases, the
> spinner cannot check whether the owner is still on_cpu, because
> lock->owner is NULL.
>
> A solution that would address the preemption part of this problem would
> be to disable preemption between acquiring/releasing the mutex and
> setting/clearing lock->owner. However, that would add overhead to the
> mutex fastpath.
It's not uncommon to disable preemption in hot paths; the overhead should
actually be quite small.
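Roughly (again a hypothetical, untested sketch; the slowpath sleeps, so
it would have to re-enable preemption on entry, and that asymmetry is
part of the cost being weighed here):

	void __sched mutex_lock(struct mutex *lock)
	{
		might_sleep();
		/*
		 * Keep preemption disabled across the count decrement
		 * and the owner assignment, so a spinner never observes
		 * a held lock with lock->owner == NULL. The slowpath
		 * would need a matching preempt_enable() before it can
		 * schedule.
		 */
		preempt_disable();
		__mutex_fastpath_lock(&lock->count, __mutex_lock_slowpath);
		mutex_set_owner(lock);
		preempt_enable();
	}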
>
> The solution used in this patch is to limit the number of times a thread
> can spin on lock->count when !owner.
>
> The threshold used in this patch for each spinner was 128, which appeared
> to be a generous value, but any suggestions on another method to determine
> the threshold are welcome.
Hmm, generous compared to what? Could you elaborate on how you reached
this value? These kinds of magic numbers have produced significant debate
in the past.
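For reference, my understanding of the shape of the loop you're proposing
(paraphrased, untested sketch; MUTEX_SPIN_THRESHOLD stands in for the 128
in question):

	/* Hypothetical cap on polls of ->count while ->owner is NULL. */
	#define MUTEX_SPIN_THRESHOLD	128

	static bool mutex_optimistic_spin_sketch(struct mutex *lock)
	{
		int nr_tries = 0;

		for (;;) {
			struct task_struct *owner = ACCESS_ONCE(lock->owner);

			/* With a visible owner, spin only while it is on_cpu. */
			if (owner && !mutex_spin_on_owner(lock, owner))
				return false;

			/* Try the 1 -> 0 transition on ->count. */
			if (atomic_read(&lock->count) == 1 &&
			    atomic_cmpxchg(&lock->count, 1, 0) == 1)
				return true;

			/*
			 * No visible owner: the holder may have been
			 * preempted between taking the lock and setting
			 * ->owner (or the reverse on unlock), so bound
			 * how long we poll instead of spinning forever.
			 */
			if (!owner && ++nr_tries > MUTEX_SPIN_THRESHOLD)
				return false;

			arch_mutex_cpu_relax();
		}
	}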
Thanks,
Davidlohr