Message-ID: <20140311104503.GA10916@gmail.com>
Date: Tue, 11 Mar 2014 11:45:03 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Waiman Long <waiman.long@...com>, arnd@...db.de,
linux-arch@...r.kernel.org, x86@...nel.org,
linux-kernel@...r.kernel.org, rostedt@...dmis.org,
akpm@...ux-foundation.org, walken@...gle.com, andi@...stfloor.org,
riel@...hat.com, paulmck@...ux.vnet.ibm.com,
torvalds@...ux-foundation.org, oleg@...hat.com
Subject: Re: [RFC][PATCH 0/7] locking: qspinlock
* Peter Zijlstra <peterz@...radead.org> wrote:
> Hi Waiman,
>
> I promised you this series a number of days ago; sorry for the delay,
> I've been somewhat unwell :/
>
> That said, these few patches start with a (hopefully) simple and
> correct form of the queue spinlock (see the first sketch below the
> quoted mail), and then gradually build upon it, explaining each
> optimization as we go.
>
> Having these optimizations as separate patches helps twofold:
> firstly, it makes one aware of which exact optimizations were done;
> and secondly, it allows one to prove or disprove any one step,
> seeing as they should be mostly identity transforms.
>
> The resulting code is close to what you posted, I think; however, it
> has one atomic op less in the pending wait-acquire case (see the
> second sketch below) for NR_CPUS != huge. It also doesn't do lock
> stealing; it's still perfectly fair afaict.
>
> Have I missed any tricks from your code?
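
For readers following along, here is a minimal userspace sketch (not
the kernel patches themselves) of the classic MCS queued lock that a
series like this starts from. It uses C11 atomics in place of the
kernel's primitives, and the names (mcs_node, mcs_lock, mcs_unlock)
are made up for illustration:

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct mcs_node {
		struct mcs_node *_Atomic next;
		atomic_bool locked;	/* set when our predecessor hands off */
	};

	/* The lock is just an atomic tail pointer into the waiter queue. */
	typedef struct mcs_node *_Atomic mcs_lock_t;

	static void mcs_lock(mcs_lock_t *lock, struct mcs_node *node)
	{
		struct mcs_node *prev;

		atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
		atomic_store_explicit(&node->locked, false, memory_order_relaxed);

		/* Join the queue tail; one atomic op per contended acquire. */
		prev = atomic_exchange_explicit(lock, node, memory_order_acq_rel);
		if (!prev)
			return;		/* queue was empty: we own the lock */

		/* Link in behind the predecessor, then spin on our own node. */
		atomic_store_explicit(&prev->next, node, memory_order_release);
		while (!atomic_load_explicit(&node->locked, memory_order_acquire))
			;		/* cpu_relax() in kernel code */
	}

	static void mcs_unlock(mcs_lock_t *lock, struct mcs_node *node)
	{
		struct mcs_node *next =
			atomic_load_explicit(&node->next, memory_order_acquire);

		if (!next) {
			struct mcs_node *expected = node;

			/* No visible successor: try resetting the tail to empty. */
			if (atomic_compare_exchange_strong_explicit(lock,
					&expected, NULL, memory_order_release,
					memory_order_relaxed))
				return;

			/* A successor is mid-enqueue; wait for its link. */
			while (!(next = atomic_load_explicit(&node->next,
							     memory_order_acquire)))
				;
		}

		/* Hand off to the next waiter in FIFO order: the fairness. */
		atomic_store_explicit(&next->locked, true, memory_order_release);
	}

Each CPU or thread supplies its own node (on-stack or per-cpu), so
waiters spin on private memory instead of hammering the shared lock
word; that is the property the later patches preserve while shrinking
the lock itself to a single 32-bit word.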
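And a similarly hedged sketch of the pending-bit idea behind the "one
atomic op less" remark above. This is not the actual patch: the MCS
queue fallback is omitted entirely, so in this toy the lock word only
ever carries the LOCKED and PENDING bits, whereas the real code also
packs a queue-tail index into the upper bits:

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdint.h>

	#define QS_LOCKED	0x001U	/* byte 0: a lock holder is present */
	#define QS_PENDING	0x100U	/* byte 1: one waiter spins on the word */

	struct qspinlock {
		_Atomic uint32_t val;
	};

	/*
	 * First-contender path: one atomic fetch_or to become the pending
	 * waiter, after which both the wait and the acquire are plain
	 * loads/stores; no second atomic RMW is needed.
	 */
	static bool qspin_trylock_pending(struct qspinlock *lock)
	{
		uint32_t old;

		old = atomic_fetch_or_explicit(&lock->val, QS_PENDING,
					       memory_order_acquire);
		if (old & QS_PENDING)
			return false;	/* someone is already pending: give up */

		/* Wait for the owner to drop LOCKED; no further atomics. */
		while (atomic_load_explicit(&lock->val, memory_order_acquire)
		       & QS_LOCKED)
			;

		/*
		 * We are the only pending waiter, so taking LOCKED and
		 * clearing PENDING is a single plain store.
		 */
		atomic_store_explicit(&lock->val, QS_LOCKED,
				      memory_order_release);
		return true;
	}

	static void qspin_lock(struct qspinlock *lock)
	{
		uint32_t zero = 0;

		/* Uncontended fast path: 0 -> LOCKED with one cmpxchg. */
		if (atomic_compare_exchange_strong_explicit(&lock->val, &zero,
				QS_LOCKED, memory_order_acquire,
				memory_order_relaxed))
			return;

		/* Real code falls back to an MCS queue instead of looping. */
		while (!qspin_trylock_pending(lock))
			;
	}

	static void qspin_unlock(struct qspinlock *lock)
	{
		/* Clear only the locked bit; a pending waiter picks it up. */
		atomic_fetch_and_explicit(&lock->val, ~QS_LOCKED,
					  memory_order_release);
	}

The NR_CPUS remark presumably concerns the word layout: with few
enough CPUs the tail index fits in the upper 16 bits alongside the
locked and pending bytes, which is what makes these partial-word and
single-RMW tricks possible.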
Waiman, you indicated in the other thread that these look good to you,
right? If so, I can queue them up so that they form a base for further
work.
It would be nice to have per-patch performance measurements though ...
this split-up structure enables that rather nicely.
Thanks,
Ingo