Message-ID: <lsq.1538257387.173395561@decadent.org.uk>
Date: Sat, 29 Sep 2018 22:43:07 +0100
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org, "Joerg Roedel" <joro@...tes.org>,
"Borislav Petkov" <bp@...e.de>,
"Konrad Rzeszutek Wilk" <konrad.wilk@...cle.com>,
"Thomas Gleixner" <tglx@...utronix.de>,
"Tom Lendacky" <thomas.lendacky@....com>,
"Mikhail Gavrilov" <mikhail.v.gavrilov@...il.com>,
kvm@...r.kernel.org, "Paolo Bonzini" <pbonzini@...hat.com>,
"Radim Krčmář" <rkrcmar@...hat.com>,
x86@...nel.org, "Matthew Wilcox" <willy@...radead.org>,
"Thomas Gleixner" <tglx@...xxxxxxxxxx>
Subject: [PATCH 3.16 049/131] KVM: x86: SVM: Call x86_spec_ctrl_set_guest/host()
with interrupts disabled
3.16.59-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner <tglx@...utronix.de>
commit 024d83cadc6b2af027e473720f3c3da97496c318 upstream.
Mikhail reported the following lockdep splat:
WARNING: possible irq lock inversion dependency detected
CPU 0/KVM/10284 just changed the state of lock:
000000000d538a88 (&st->lock){+...}, at:
speculative_store_bypass_update+0x10b/0x170
but this lock was taken by another, HARDIRQ-safe lock
in the past:
(&(&sighand->siglock)->rlock){-.-.}
and interrupts could create inverse lock ordering between them.
Possible interrupt unsafe locking scenario:
    CPU0                    CPU1
    ----                    ----
    lock(&st->lock);
                            local_irq_disable();
                            lock(&(&sighand->siglock)->rlock);
                            lock(&st->lock);
    <Interrupt>
      lock(&(&sighand->siglock)->rlock);
*** DEADLOCK ***
The code path which connects those locks is:
speculative_store_bypass_update()
ssb_prctl_set()
do_seccomp()
do_syscall_64()
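For readers who want to see the inversion in isolation, here is a minimal
userspace analogue in C; the mutex names, the thread framing and everything
else in it are illustrative assumptions, not code from this patch. lock_a
stands in for &st->lock, lock_b for &sighand->siglock, and the interrupt in
the report is modelled by a second thread taking the locks in the opposite
order. Build with -lpthread; a run may genuinely deadlock, which is the point.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;	/* ~ &st->lock */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;	/* ~ siglock   */

/* "CPU0" path: holds lock_a when the "interrupt" demands lock_b. */
static void *cpu0(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_a);
	pthread_mutex_lock(&lock_b);	/* blocks forever if cpu1 holds lock_b */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

/* "CPU1" path: siglock first, then st->lock, as in the ssb_prctl_set() chain. */
static void *cpu1(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_b);
	pthread_mutex_lock(&lock_a);	/* blocks forever if cpu0 holds lock_a */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	puts("lucky interleaving: no deadlock this run, but the ordering is unsafe");
	return 0;
}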
In svm_vcpu_run() speculative_store_bypass_update() is called with
interrupts enabled via x86_virt_spec_ctrl_set_guest/host().
This is actually a false positive, because GIF=0 so interrupts are
disabled even if IF=1; however, we can easily move the invocations of
x86_virt_spec_ctrl_set_guest/host() into the interrupt disabled region to
cure it, and it's a good idea to keep the GIF=0/IF=1 area as small
and self-contained as possible.
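The resulting ordering is easiest to follow as a condensed sketch. The small
C program below stubs the kernel primitives with prints and drops their real
arguments, so only the sequence is meaningful; the function names come from
the diff below, while the stub bodies and messages are assumptions of this
sketch, not kernel behaviour.

#include <stdio.h>

/* Stubs standing in for the kernel primitives; only the order matters. */
static void clgi(void)              { puts("clgi():  GIF=0, IRQs blocked even if IF=1"); }
static void stgi(void)              { puts("stgi():  GIF=1, IRQs can be taken again"); }
static void local_irq_enable(void)  { puts("sti:     IF=1 (GIF=0 still blocks IRQs)"); }
static void local_irq_disable(void) { puts("cli:     IF=0"); }
static void x86_spec_ctrl_set_guest(void)    { puts("load guest SPEC_CTRL"); }
static void x86_spec_ctrl_restore_host(void) { puts("restore host SPEC_CTRL"); }

int main(void)
{
	clgi();
	x86_spec_ctrl_set_guest();	/* moved before local_irq_enable() */
	local_irq_enable();
	puts("VMRUN ... #VMEXIT");	/* guest executes here */
	local_irq_disable();
	x86_spec_ctrl_restore_host();	/* moved after local_irq_disable() */
	stgi();
	return 0;
}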
Fixes: 1f50ddb4f418 ("x86/speculation: Handle HT correctly on AMD")
Reported-by: Mikhail Gavrilov <mikhail.v.gavrilov@...il.com>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@...il.com>
Cc: Joerg Roedel <joro@...tes.org>
Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Radim Krčmář <rkrcmar@...hat.com>
Cc: Matthew Wilcox <willy@...radead.org>
Cc: Borislav Petkov <bp@...e.de>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Cc: Tom Lendacky <thomas.lendacky@....com>
Cc: kvm@...r.kernel.org
Cc: x86@...nel.org
Signed-off-by: Paolo Bonzini <pbonzini@...hat.com>
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
arch/x86/kvm/svm.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3983,8 +3983,6 @@ static void svm_vcpu_run(struct kvm_vcpu
 
 	clgi();
 
-	local_irq_enable();
-
 	/*
 	 * If this vCPU has touched SPEC_CTRL, restore the guest's value if
 	 * it's non-zero. Since vmentry is serialising on affected CPUs, there
@@ -3993,6 +3991,8 @@ static void svm_vcpu_run(struct kvm_vcpu
 	 */
 	x86_spec_ctrl_set_guest(svm->spec_ctrl, svm->virt_spec_ctrl);
 
+	local_irq_enable();
+
 	asm volatile (
 		"push %%" _ASM_BP "; \n\t"
 		"mov %c[rbx](%[svm]), %%" _ASM_BX " \n\t"
@@ -4115,12 +4115,12 @@ static void svm_vcpu_run(struct kvm_vcpu
 	if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
 		svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
 
-	x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
-
 	reload_tss(vcpu);
 
 	local_irq_disable();
 
+	x86_spec_ctrl_restore_host(svm->spec_ctrl, svm->virt_spec_ctrl);
+
 	vcpu->arch.cr2 = svm->vmcb->save.cr2;
 	vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
 	vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;