Message-Id: <1552431636-31511-5-git-send-email-fenghua.yu@intel.com>
Date:   Tue, 12 Mar 2019 16:00:22 -0700
From:   Fenghua Yu <fenghua.yu@...el.com>
To:     "Thomas Gleixner" <tglx@...utronix.de>,
        "Ingo Molnar" <mingo@...hat.com>, "H Peter Anvin" <hpa@...or.com>,
        "Dave Hansen" <dave.hansen@...el.com>,
        "Paolo Bonzini" <pbonzini@...hat.com>,
        "Ashok Raj" <ashok.raj@...el.com>,
        "Peter Zijlstra" <peterz@...radead.org>,
        "Xiaoyao Li " <xiaoyao.li@...el.com>,
        "Michael Chan" <michael.chan@...adcom.com>,
        "Ravi V Shankar" <ravi.v.shankar@...el.com>
Cc:     "linux-kernel" <linux-kernel@...r.kernel.org>,
        "x86" <x86@...nel.org>, linux-wireless@...r.kernel.org,
        netdev@...r.kernel.org, kvm@...r.kernel.org,
        Fenghua Yu <fenghua.yu@...el.com>
Subject: [PATCH v5 04/18] x86/split_lock: Align x86_capability to unsigned long to avoid split locked access

set_cpu_cap() issues a locked BTS instruction and clear_cpu_cap() a locked
BTR instruction to operate on the bitmap defined in x86_capability.
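
For reference, the helpers look roughly like the sketch below (a
simplified sketch, not a verbatim quote of the kernel headers): the
__u32 array is cast to unsigned long * and handed to the atomic bitops,
which compile to locked BTS/BTR on x86:

	/* Simplified sketch of the helpers (not verbatim kernel source). */
	#define set_cpu_cap(c, bit) \
		set_bit(bit, (unsigned long *)((c)->x86_capability))
	#define clear_cpu_cap(c, bit) \
		clear_bit(bit, (unsigned long *)((c)->x86_capability))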

A locked BTS/BTR instruction accesses a single unsigned long location.
In 64-bit mode, that location is at:
base address of x86_capability + (bit offset in x86_capability / 64) * 8

Since the base address of x86_capability may not be aligned to unsigned
long, the single unsigned long location may cross two cache lines, and
accessing that location with locked BTS/BTR instructions will trigger #AC.
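
To make the arithmetic concrete, here is a small standalone userspace
sketch (the helper names and the 64-byte cache line size are illustrative
assumptions, not kernel code) that computes the word a locked bitop would
touch and whether that 8-byte access straddles a cache line:

	#include <stdint.h>
	#include <stdio.h>

	#define CACHE_LINE_SIZE 64	/* assumed cache line size */

	/* Address of the 64-bit word a locked BTS/BTR on 'bit' accesses. */
	static uintptr_t target_word(uintptr_t base, unsigned int bit)
	{
		return base + (bit / 64) * 8;
	}

	/* True if the 8-byte access would straddle two cache lines. */
	static int is_split(uintptr_t base, unsigned int bit)
	{
		uintptr_t addr = target_word(base, bit);

		return addr / CACHE_LINE_SIZE != (addr + 7) / CACHE_LINE_SIZE;
	}

	int main(void)
	{
		/* 4-byte aligned base, as a __u32 array may be. */
		uintptr_t base = 0x1000 + 4;

		printf("%d\n", is_split(base, 0));	/* 0: 0x1004-0x100b, one line */
		printf("%d\n", is_split(base, 448));	/* 1: 0x103c-0x1043, two lines */
		return 0;
	}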

To fix the split lock issue, align x86_capability to unsigned long so that
the accessed location always falls within one cache line.

Changing x86_capability[]'s type to unsigned long would also fix the
issue, because the array would then be naturally aligned to unsigned
long; but that requires additional code changes. So we choose the simpler
solution of enforcing alignment with __aligned(sizeof(unsigned long)).
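
Side by side, the two options look roughly like this (a sketch of the
field in isolation; the option-1 sizing is an illustrative, 64-bit-only
approximation):

	/* Option 1 (not taken): change the type; naturally aligned,
	 * but callers indexing the array as __u32 would need auditing.
	 * Sizing shown for 64-bit only, purely illustrative. */
	unsigned long		x86_capability[(NCAPINTS + NBUGINTS + 1) / 2];

	/* Option 2 (chosen): keep the __u32 layout, force alignment. */
	__u32			x86_capability[NCAPINTS + NBUGINTS]
				__aligned(sizeof(unsigned long));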

Signed-off-by: Fenghua Yu <fenghua.yu@...el.com>
---
 arch/x86/include/asm/processor.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 2bb3a648fc12..b92296595fbe 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -93,7 +93,9 @@ struct cpuinfo_x86 {
 	__u32			extended_cpuid_level;
 	/* Maximum supported CPUID level, -1=no CPUID: */
 	int			cpuid_level;
-	__u32			x86_capability[NCAPINTS + NBUGINTS];
+	/* Unsigned long alignment to avoid split lock in atomic bitmap ops */
+	__u32			x86_capability[NCAPINTS + NBUGINTS]
+				__aligned(sizeof(unsigned long));
 	char			x86_vendor_id[16];
 	char			x86_model_id[64];
 	/* in KB - valid for CPUS which support this call: */
-- 
2.19.1
