Message-ID: <159610528353.4006.8299813904303562704.tip-bot2@tip-bot2>
Date: Thu, 30 Jul 2020 10:34:43 -0000
From: "tip-bot2 for Thomas Gleixner" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Qian Cai <cai@....pw>, Thomas Gleixner <tglx@...utronix.de>,
x86 <x86@...nel.org>, LKML <linux-kernel@...r.kernel.org>
Subject: [tip: x86/entry] x86/kvm: Use __xfer_to_guest_mode_work_pending() in
kvm_run_vcpu()
The following commit has been merged into the x86/entry branch of tip:
Commit-ID: f3020b8891b890b48d9e1a83241e3cce518427c1
Gitweb: https://git.kernel.org/tip/f3020b8891b890b48d9e1a83241e3cce518427c1
Author: Thomas Gleixner <tglx@...utronix.de>
AuthorDate: Thu, 30 Jul 2020 09:19:01 +02:00
Committer: Thomas Gleixner <tglx@...utronix.de>
CommitterDate: Thu, 30 Jul 2020 12:31:47 +02:00
x86/kvm: Use __xfer_to_guest_mode_work_pending() in kvm_run_vcpu()
The comments explicitly explain that the work flags check and handling in
kvm_run_vcpu() are done with preemption and interrupts enabled, as KVM
invokes the check again right before entering guest mode with interrupts
disabled, which guarantees that the work flags are observed and handled
before VMENTER.
Nevertheless, the pending-flags check in kvm_run_vcpu() uses the helper
variant which requires interrupts to be disabled, triggering an instant
lockdep splat. This was caught in testing before and then not fixed up in
the patch before applying. :(
Use the relaxed and intentionally racy __xfer_to_guest_mode_work_pending()
instead.
Fixes: 72c3c0fe54a3 ("x86/kvm: Use generic xfer to guest work function")
Reported-by: Qian Cai <cai@....pw>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Link: https://lkml.kernel.org/r/87bljxa2sa.fsf@nanos.tec.linutronix.de
---
arch/x86/kvm/x86.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 82d4a9e..5325972 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8682,7 +8682,7 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
break;
}
- if (xfer_to_guest_mode_work_pending()) {
+ if (__xfer_to_guest_mode_work_pending()) {
srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
r = xfer_to_guest_mode_handle_work(vcpu);
if (r)