Message-ID: <746b5a8752cc40b1b954913f786ed9a6@AcuMS.aculab.com>
Date: Mon, 24 Jun 2019 15:12:49 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Fenghua Yu' <fenghua.yu@...el.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
H Peter Anvin <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Dave Hansen <dave.hansen@...el.com>,
"Paolo Bonzini" <pbonzini@...hat.com>,
Radim Krcmar <rkrcmar@...hat.com>,
Christopherson Sean J <sean.j.christopherson@...el.com>,
Ashok Raj <ashok.raj@...el.com>,
Tony Luck <tony.luck@...el.com>,
Dan Williams <dan.j.williams@...el.com>,
"Xiaoyao Li " <xiaoyao.li@...el.com>,
"Sai Praneeth Prakhya" <sai.praneeth.prakhya@...el.com>,
Ravi V Shankar <ravi.v.shankar@...el.com>
CC: linux-kernel <linux-kernel@...r.kernel.org>, x86 <x86@...nel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>
Subject: RE: [PATCH v9 03/17] x86/split_lock: Align x86_capability to unsigned
long to avoid split locked access
From: Fenghua Yu
> Sent: 18 June 2019 23:41
>
> set_cpu_cap() calls locked BTS and clear_cpu_cap() calls locked BTR to
> operate on bitmap defined in x86_capability.
>
> Locked BTS/BTR accesses a single unsigned long location. In 64-bit mode,
> the location is at:
> base address of x86_capability + (bit offset in x86_capability / 64) * 8
>
> Since the base address of x86_capability may not be aligned to unsigned
> long, the single unsigned long location may cross two cache lines, and
> accessing the location with locked BTS/BTR instructions will cause a
> split lock.
>
> To fix the split lock issue, align x86_capability to the size of unsigned
> long so that the location will always be within one cache line.
>
> Changing x86_capability's type to unsigned long may also fix the issue
> because x86_capability would then be naturally aligned to the size of
> unsigned long. But this needs additional code changes, so choose the
> simpler solution of setting the array's alignment to the size of
> unsigned long.
As I've pointed out several times before, this isn't the only int[] data
item in this code that gets passed to the bit operations.
Just because you haven't got a 'splat' from the others doesn't mean they
don't need fixing at the same time.
> Signed-off-by: Fenghua Yu <fenghua.yu@...el.com>
> ---
> arch/x86/include/asm/processor.h | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> index c34a35c78618..d3e017723634 100644
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -93,7 +93,9 @@ struct cpuinfo_x86 {
> __u32 extended_cpuid_level;
> /* Maximum supported CPUID level, -1=no CPUID: */
> int cpuid_level;
> - __u32 x86_capability[NCAPINTS + NBUGINTS];
> + /* Aligned to size of unsigned long to avoid split lock in atomic ops */
Wrong comment.
Something like:
/*
 * Align to sizeof(unsigned long) because the array is passed to the
 * atomic bit-op functions, which require an aligned unsigned long [].
 */
> + __u32 x86_capability[NCAPINTS + NBUGINTS]
> + __aligned(sizeof(unsigned long));
It might be better to use a union (maybe unnamed) here.
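For example, just a sketch of the idea (the second member's name is
made up here and would otherwise be unused):

	union {
		__u32 x86_capability[NCAPINTS + NBUGINTS];
		unsigned long x86_capability_align;	/* only for alignment */
	};

The unnamed union gives the array natural unsigned long alignment
without changing the type seen by existing users, so no __aligned()
annotation is needed.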
> char x86_vendor_id[16];
> char x86_model_id[64];
> /* in KB - valid for CPUS which support this call: */
> --
> 2.19.1
David
-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)