Date:   Tue, 25 Jun 2019 16:54:47 -0700
From:   Fenghua Yu <fenghua.yu@...el.com>
To:     David Laight <David.Laight@...LAB.COM>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        H Peter Anvin <hpa@...or.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Dave Hansen <dave.hansen@...el.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krcmar <rkrcmar@...hat.com>,
        Christopherson Sean J <sean.j.christopherson@...el.com>,
        Ashok Raj <ashok.raj@...el.com>,
        Tony Luck <tony.luck@...el.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Xiaoyao Li <xiaoyao.li@...el.com>,
        Sai Praneeth Prakhya <sai.praneeth.prakhya@...el.com>,
        Ravi V Shankar <ravi.v.shankar@...el.com>,
        linux-kernel <linux-kernel@...r.kernel.org>,
        x86 <x86@...nel.org>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>
Subject: Re: [PATCH v9 03/17] x86/split_lock: Align x86_capability to
 unsigned long to avoid split locked access

On Mon, Jun 24, 2019 at 03:12:49PM +0000, David Laight wrote:
> From: Fenghua Yu
> > Sent: 18 June 2019 23:41
> > 
> > set_cpu_cap() calls locked BTS and clear_cpu_cap() calls locked BTR to
> > operate on the bitmap defined in x86_capability.
> > 
> > Locked BTS/BTR accesses a single unsigned long location. In 64-bit mode,
> > the location is at:
> > base address of x86_capability + (bit offset in x86_capability / 64) * 8
> > 
> > Since the base address of x86_capability may not be aligned to unsigned long,
> > the single unsigned long location may cross two cache lines, and accessing
> > the location with locked BTS/BTR instructions will cause a split lock.
> > 
> > To fix the split lock issue, align x86_capability to the size of unsigned long
> > so that the location will always be within one cache line.
> > 
> > Changing x86_capability's type to unsigned long would also fix the issue,
> > because x86_capability would then be naturally aligned to the size of
> > unsigned long. But that needs additional code changes, so choose the simpler
> > solution of setting the array's alignment to the size of unsigned long.
> 
> As I've pointed out several times before, this isn't the only int[] data item
> in this code that gets passed to the bit operations.
> Just because you haven't got a 'splat' from the others doesn't mean they don't
> need fixing at the same time.

As Thomas suggested in https://lkml.org/lkml/2019/4/25/353, patch #0017
in this patch set adds a WARN_ON_ONCE() to audit possibly unaligned addresses
passed to the atomic bit ops.
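
For reference, the core of that audit is just an alignment check on the
address handed to the locked bit op, something like the following (a sketch
with a made-up wrapper name, not the exact code of patch #0017):

	/* Audit wrapper: warn once when a bitmap that is not aligned to
	 * unsigned long is handed to a locked bit op, i.e. a candidate
	 * for a split locked access.
	 */
	static __always_inline void audited_set_bit(long nr, volatile unsigned long *addr)
	{
		WARN_ON_ONCE(!IS_ALIGNED((unsigned long)addr, sizeof(unsigned long)));
		set_bit(nr, addr);	/* the regular locked bit op */
	}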

This patch set just enables split lock detection first. Fixing ALL split lock
issues might only become practical after the feature is upstreamed and widely
used.

> 
> > Signed-off-by: Fenghua Yu <fenghua.yu@...el.com>
> > ---
> >  arch/x86/include/asm/processor.h | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> > index c34a35c78618..d3e017723634 100644
> > --- a/arch/x86/include/asm/processor.h
> > +++ b/arch/x86/include/asm/processor.h
> > @@ -93,7 +93,9 @@ struct cpuinfo_x86 {
> >  	__u32			extended_cpuid_level;
> >  	/* Maximum supported CPUID level, -1=no CPUID: */
> >  	int			cpuid_level;
> > -	__u32			x86_capability[NCAPINTS + NBUGINTS];
> > +	/* Aligned to size of unsigned long to avoid split lock in atomic ops */
> 
> Wrong comment.
> Something like:
> 	/* Align to sizeof (unsigned long) because the array is passed to the
> 	 * atomic bit-op functions which require an aligned unsigned long []. */

The problem we are trying to fix here is not that "the array is passed to the
atomic bit-op functions which require an aligned unsigned long []".

The problem is the possible split lock. If there were no split lock issue,
there would be no need for this patch at all.

So I think my comment is right in pointing out explicitly why we need
this alignment.
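
To make the changelog's scenario concrete (the addresses below are
hypothetical, purely for illustration):

	/* x86_capability at an address that is 4-byte but not 8-byte aligned */
	unsigned long base = 0xffff888000001f3c;
	int nr = 5;					/* any bit in the first word */
	unsigned long word = base + (nr / 64) * 8;	/* word touched by LOCK BTS */

	/*
	 * word covers 0x...1f3c .. 0x...1f43 and crosses the 64-byte cache
	 * line boundary at 0x...1f40, so the locked access becomes a split lock.
	 */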

> 
> > +	__u32			x86_capability[NCAPINTS + NBUGINTS]
> > +				__aligned(sizeof(unsigned long));
> 
> It might be better to use a union (maybe unnamed) here.

That would be a separate patch. This patch simply fixes the split lock
issue.
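
For the record, I read the suggestion as something along these lines (the
second member name is made up, just to illustrate; not part of this patch):

	union {
		__u32		x86_capability[NCAPINTS + NBUGINTS];
		unsigned long	x86_capability_alignment;
	};

The unnamed union would give the array the natural alignment of unsigned long,
but as said above that is a separate change from this minimal fix.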

Thanks.

-Fenghua
