Message-ID: <53B5BE99.1090008@hp.com>
Date: Thu, 03 Jul 2014 16:35:37 -0400
From: Waiman Long <waiman.long@...com>
To: Jason Low <jason.low2@...com>
CC: Davidlohr Bueso <davidlohr@...com>,
Peter Zijlstra <peterz@...radead.org>,
torvalds@...ux-foundation.org, paulmck@...ux.vnet.ibm.com,
mingo@...nel.org, linux-kernel@...r.kernel.org, riel@...hat.com,
akpm@...ux-foundation.org, hpa@...or.com, andi@...stfloor.org,
James.Bottomley@...senpartnership.com, rostedt@...dmis.org,
tim.c.chen@...ux.intel.com, aswin@...com, scott.norton@...com,
chegu_vinod@...com
Subject: Re: [RFC] Cancellable MCS spinlock rework
On 07/03/2014 02:34 PM, Jason Low wrote:
> On Thu, 2014-07-03 at 10:09 -0700, Davidlohr Bueso wrote:
>> On Thu, 2014-07-03 at 09:31 +0200, Peter Zijlstra wrote:
>>> On Wed, Jul 02, 2014 at 10:30:03AM -0700, Jason Low wrote:
>>>> Would potentially reducing the size of the rw semaphore structure by 32
>>>> bits (for all architectures using optimistic spinning) be a nice
>>>> benefit?
>>> Possibly, although I had a look at the mutex structure and we didn't
>>> have a hole to place it in, unlike what you found with the rwsem.
>> Yeah, and currently struct rw_semaphore is the largest lock we have in
>> the kernel. Shaving off space is definitely welcome.
> Right, especially if it could help things like xfs inode.
>
I do see a point in reducing the size of the rwsem structure. However, I
don't quite understand the point of converting the pointers in the
optimistic_spin_queue structure to atomic_t. That structure is cacheline
aligned, so there is no size saving there, and converting the pointers to
atomic_t adds a bit of overhead for turning the encoded CPU number back
into the actual pointer.
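For illustration, the extra decode step I am referring to would look
something like the sketch below. The node type and helper names here are
just my illustration, not code from the patch:

	#include <linux/percpu.h>

	/*
	 * Illustrative sketch only.  Storing an encoded CPU number instead
	 * of a node pointer means every access to the tail node has to map
	 * that number back to the per-cpu node.
	 */
	struct optimistic_spin_node {
		struct optimistic_spin_node *next, *prev;
		int locked;
	};

	static DEFINE_PER_CPU_SHARED_ALIGNED(struct optimistic_spin_node, osq_node);

	/* 0 is reserved for "no CPU", so CPU numbers are stored off by one. */
	static inline int encode_cpu(int cpu_nr)
	{
		return cpu_nr + 1;
	}

	static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val)
	{
		return per_cpu_ptr(&osq_node, encoded_cpu_val - 1);
	}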
So my suggestion is to just change what is stored in the mutex and rwsem
structures to atomic_t, but keep the pointers in the
optimistic_spin_queue structure.
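Roughly what I mean is the following. Field names and ordering are only
illustrative, not the actual structure definitions:

	/* Queue nodes keep real pointers; no decode step on the spinning path. */
	struct optimistic_spin_queue {
		struct optimistic_spin_queue *next, *prev;
		int locked;
	};

	struct rw_semaphore {
		long			count;
		raw_spinlock_t		wait_lock;
		struct list_head	wait_list;
		/*
		 * Encoded CPU number of the queue tail: an atomic_t (4 bytes)
		 * instead of an 8-byte pointer on 64-bit, which is where the
		 * 32-bit size reduction comes from.
		 */
		atomic_t		osq;
	};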
-Longman