Message-ID: <20191121201951.GY4097@hirez.programming.kicks-ass.net>
Date: Thu, 21 Nov 2019 21:19:51 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Fenghua Yu <fenghua.yu@...el.com>
Cc: Andy Lutomirski <luto@...capital.net>,
Andy Lutomirski <luto@...nel.org>,
David Laight <David.Laight@...lab.com>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
H Peter Anvin <hpa@...or.com>,
Tony Luck <tony.luck@...el.com>,
Ashok Raj <ashok.raj@...el.com>,
Ravi V Shankar <ravi.v.shankar@...el.com>,
linux-kernel <linux-kernel@...r.kernel.org>, x86 <x86@...nel.org>
Subject: Re: [PATCH v10 6/6] x86/split_lock: Enable split lock detection by
kernel parameter
On Thu, Nov 21, 2019 at 12:25:35PM -0800, Fenghua Yu wrote:
> > > We are working on a separate patch set to fix all the split lock issues
> > > in the atomic bitops. Per Peter Anvin's and Tony Luck's suggestions:
> > > 1. Still keep the byte optimization if nr is a constant. No split lock.
> > > 2. If the type of *addr is unsigned long, do a quadword atomic instruction
> > > on addr. No split lock.
> > > 3. If the type of *addr is unsigned int, do a word atomic instruction
> > > on addr. No split lock.
> > > 4. Otherwise, re-calculate addr to point to the 32-bit word which contains
> > > the bit and operate on that bit (see the sketch below). No split lock.
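
[ For illustration, a minimal sketch of what case 4 above could look like.
  The helper name set_bit_no_split() and the use of the userspace
  __atomic_fetch_or() builtin are assumptions made for the sketch, not
  taken from the actual patch set: ]

#include <stdint.h>

/*
 * Hypothetical sketch of case 4: re-point addr at the naturally aligned
 * 32-bit word containing bit nr, so the locked read-modify-write never
 * crosses a cache line and cannot trigger split lock detection.
 * Assumes addr itself is at least 4-byte aligned.
 */
static inline void set_bit_no_split(unsigned long nr, void *addr)
{
	uint32_t *word = (uint32_t *)addr + nr / 32;	/* containing 32-bit word */
	uint32_t mask = 1U << (nr % 32);

	__atomic_fetch_or(word, mask, __ATOMIC_SEQ_CST);	/* single aligned locked OR */
}
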
> Actually, we only found 8 places that call the atomic bitops with a cast to
> "unsigned long *". With the above changes, plus 8 other patches that remove
> those casts, the atomic bitops in the current kernel become split-lock free.
Those above changes are never going to happen.