Message-Id: <20200315050517.127446-3-xiaoyao.li@intel.com>
Date: Sun, 15 Mar 2020 13:05:10 +0800
From: Xiaoyao Li <xiaoyao.li@...el.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
hpa@...or.com, Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
kvm@...r.kernel.org, x86@...nel.org, linux-kernel@...r.kernel.org
Cc: Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Arvind Sankar <nivedita@...m.mit.edu>,
Fenghua Yu <fenghua.yu@...el.com>,
Tony Luck <tony.luck@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Jim Mattson <jmattson@...gle.com>,
Xiaoyao Li <xiaoyao.li@...el.com>
Subject: [PATCH v5 2/9] x86/split_lock: Avoid runtime reads of the TEST_CTRL MSR

In a context switch from a task that is detecting split locks to one
that is not (or vice versa), we need to update the TEST_CTRL MSR.
Currently this is done with the common sequence:

	read the MSR
	flip the bit
	write the MSR

in order to avoid changing the value of any reserved bits in the MSR.

Cache the value of the TEST_CTRL MSR when we read it during
initialization so we can avoid an expensive RDMSR instruction during
context switch.
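
For illustration, a minimal sketch of the before/after update paths
(simplified; rdmsrl()/wrmsrl() and the MSR_TEST_CTRL* symbols are the
real kernel names, but the helpers below are stand-ins, not the
functions touched by this patch):

/*
 * Illustrative sketch only; assumes <asm/msr.h> and the MSR_TEST_CTRL
 * definitions from <asm/msr-index.h>.
 */

/* Before: read-modify-write, one RDMSR on every toggle. */
static void sld_update_msr_rmw(bool on)
{
	u64 test_ctrl_val;

	rdmsrl(MSR_TEST_CTRL, test_ctrl_val);	/* expensive at context switch */
	if (on)
		test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
	else
		test_ctrl_val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
	wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
}

/*
 * After: start from the value cached at init time, which already
 * carries the reserved bits, so no RDMSR is needed here.
 */
static u64 cached_test_ctrl_val;	/* filled once during init */

static void sld_update_msr_cached(bool on)
{
	u64 val = cached_test_ctrl_val;

	if (on)
		val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
	else
		val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
	wrmsrl(MSR_TEST_CTRL, val);
}
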
Suggested-by: Sean Christopherson <sean.j.christopherson@...el.com>
Originally-by: Tony Luck <tony.luck@...el.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@...el.com>
---
 arch/x86/kernel/cpu/intel.c | 24 ++++++++++++++++--------
 1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 064ba12defc8..4b3245035b5a 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -1020,6 +1020,14 @@ static void __init split_lock_setup(void)
 	}
 }
 
+/*
+ * Soft copy of MSR_TEST_CTRL initialized when we first read the
+ * MSR. Used at runtime to avoid using rdmsr again just to collect
+ * the reserved bits in the MSR. We assume reserved bits are the
+ * same on all CPUs.
+ */
+static u64 test_ctrl_val;
+
 /*
  * Locking is not required at the moment because only bit 29 of this
  * MSR is implemented and locking would not prevent that the operation
@@ -1027,16 +1035,14 @@ static void __init split_lock_setup(void)
  */
 static void __sld_msr_set(bool on)
 {
-	u64 test_ctrl_val;
-
-	rdmsrl(MSR_TEST_CTRL, test_ctrl_val);
+	u64 val = test_ctrl_val;
 
 	if (on)
-		test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
+		val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
 	else
-		test_ctrl_val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
+		val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
 
-	wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
+	wrmsrl(MSR_TEST_CTRL, val);
 }
 
 /*
@@ -1048,11 +1054,13 @@ static void __sld_msr_set(bool on)
  */
 static void split_lock_init(struct cpuinfo_x86 *c)
 {
-	u64 test_ctrl_val;
+	u64 val;
 
-	if (rdmsrl_safe(MSR_TEST_CTRL, &test_ctrl_val))
+	if (rdmsrl_safe(MSR_TEST_CTRL, &val))
 		goto msr_broken;
 
+	test_ctrl_val = val;
+
 	switch (sld_state) {
 	case sld_off:
 		if (wrmsrl_safe(MSR_TEST_CTRL, test_ctrl_val & ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT))
--
2.20.1