Message-ID: <1402597547.2627.4.camel@buesod1.americas.hpqcorp.net>
Date: Thu, 12 Jun 2014 11:25:47 -0700
From: Davidlohr Bueso <davidlohr@...com>
To: Jason Low <jason.low2@...com>
Cc: mingo@...nel.org, peterz@...radead.org, tglx@...utronix.de,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
tim.c.chen@...ux.intel.com, paulmck@...ux.vnet.ibm.com,
rostedt@...dmis.org, Waiman.Long@...com, scott.norton@...com,
aswin@...com
Subject: Re: [PATCH v2 4/4] mutex: Optimize mutex trylock slowpath
On Wed, 2014-06-11 at 11:37 -0700, Jason Low wrote:
> The mutex_trylock() function calls into __mutex_trylock_fastpath() when
> trying to obtain the mutex. On 32-bit x86, in the !__HAVE_ARCH_CMPXCHG
> case, __mutex_trylock_fastpath() calls directly into __mutex_trylock_slowpath()
> regardless of whether or not the mutex is locked.
>
> In __mutex_trylock_slowpath(), we then acquire the wait_lock spinlock, xchg()
> lock->count with -1, set lock->count back to 0 if there are no waiters,
> and return true if the previous lock count was 1.
>
> However, if the mutex is already locked, then there isn't much point
> in attempting all of the above expensive operations. In this patch, we only
> attempt the above trylock operations if the mutex is unlocked.
>
> Signed-off-by: Jason Low <jason.low2@...com>
This is significantly cleaner than the v1 patch.
Reviewed-by: Davidlohr Bueso <davidlohr@...com>