Message-ID: <20190403161003.GL4038@hirez.programming.kicks-ass.net>
Date: Wed, 3 Apr 2019 18:10:03 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Alex Kogan <alex.kogan@...cle.com>
Cc: linux@...linux.org.uk, mingo@...hat.com, will.deacon@....com,
arnd@...db.de, longman@...hat.com, linux-arch@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
tglx@...utronix.de, bp@...en8.de, hpa@...or.com, x86@...nel.org,
steven.sistare@...cle.com, daniel.m.jordan@...cle.com,
dave.dice@...cle.com, rahul.x.yadav@...cle.com
Subject: Re: [PATCH v2 3/5] locking/qspinlock: Introduce CNA into the slow
path of qspinlock
On Wed, Apr 03, 2019 at 11:53:53AM -0400, Alex Kogan wrote:
> > One thing we could maybe do is change locked and count to u8, then your
> > overlay structure could be something like:
> >
> > struct mcs_spinlock {
> >         struct mcs_spinlock *next;
> >         u8 locked;
> >         u8 count;
> > };
> I was trying to keep the size of the mcs_spinlock structure for the non-NUMA variant unchanged.
> If this is not a huge concern, changing the fields as above would indeed simplify a few things.
Well, sizeof(struct mcs_spinlock) is unchanged by the above proposal
(for x86_64).
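For illustration, a minimal userspace sketch of that size argument (not
code from the patch; the names mcs_spinlock_old/mcs_spinlock_new are
hypothetical stand-ins for the current and proposed layouts). On LP64
targets such as x86_64 the 8-byte pointer forces 8-byte alignment, so
the two u8 fields are padded out to the same 16 bytes that the two ints
occupy today:

#include <assert.h>
#include <stdint.h>

/* Current layout: two ints after the pointer. */
struct mcs_spinlock_old {
        struct mcs_spinlock_old *next;  /* 8 bytes */
        int locked;                     /* 4 bytes */
        int count;                      /* 4 bytes */
};                                      /* total: 16 bytes */

/* Proposed layout: two u8s; pointer alignment pads the tail. */
struct mcs_spinlock_new {
        struct mcs_spinlock_new *next;  /* 8 bytes */
        uint8_t locked;                 /* 1 byte  */
        uint8_t count;                  /* 1 byte  */
};                                      /* + 6 bytes padding = 16 */

int main(void)
{
        /* Both hold on LP64 targets such as x86_64. */
        static_assert(sizeof(struct mcs_spinlock_old) == 16, "old != 16");
        static_assert(sizeof(struct mcs_spinlock_new) == 16, "new != 16");
        return 0;
}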
And I don't think it matters for x86, which is very good at byte
accesses; my only concern would be other architectures that might not
be as good at byte accesses. For instance, Alpha <EV56 would generate
shit code, but then, Alpha isn't using qspinlock anyway.