Date:   Fri,  8 Oct 2021 18:01:35 -0700
From:   Sean Christopherson <seanjc@...gle.com>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     Sean Christopherson <seanjc@...gle.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org, Maxim Levitsky <mlevitsk@...hat.com>
Subject: [PATCH 2/2] KVM: x86: Simplify APICv update request logic

Drop confusing and flawed code that intentionally sets the per-VM APICv
inhibit mask after sending KVM_REQ_APICV_UPDATE to all vCPUs.  The code
is confusing because it's not obvious that there's no race between a
vCPU seeing the request and consuming the new mask.  The code works only
because the request handling path takes the same lock, i.e. responding
vCPUs will be blocked until the full update completes.
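
To illustrate why the lock makes the ordering immaterial, here is a
minimal stand-alone sketch (not KVM's actual code; the lock, the mask,
and the helpers send_request_to_all_vcpus() and consume() are
hypothetical stand-ins for apicv_update_lock, apicv_inhibit_reasons,
and the real request machinery):

  /* Sketch only; assumes <linux/mutex.h> and <linux/bitops.h>. */
  static DEFINE_MUTEX(update_lock);       /* stands in for apicv_update_lock */
  static unsigned long inhibit_reasons;   /* stands in for apicv_inhibit_reasons */

  static void update_inhibit(unsigned long bit, bool set)
  {
  	mutex_lock(&update_lock);
  	/*
  	 * Whether the request is sent before or after the write is
  	 * irrelevant to lock-taking readers: they cannot proceed until
  	 * mutex_unlock(), at which point the mask is final.
  	 */
  	send_request_to_all_vcpus();    /* hypothetical helper */
  	if (set)
  		__set_bit(bit, &inhibit_reasons);
  	else
  		__clear_bit(bit, &inhibit_reasons);
  	mutex_unlock(&update_lock);
  }

  static void handle_apicv_update_request(void)
  {
  	mutex_lock(&update_lock);       /* blocks until the update completes */
  	consume(inhibit_reasons);       /* hypothetical; always sees the final mask */
  	mutex_unlock(&update_lock);
  }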

The concept is flawed because ordering the mask update after the request
can't be relied upon for correct behavior.  The only guarantee provided
by kvm_make_all_cpus_request() is that all vCPUs have exited the guest.
It does not guarantee all vCPUs are waiting on the lock.  E.g. a vCPU
could be in the process of handling an emulated MMIO APIC access page
fault that occurred before the APICv update was initiated, and thus
toggling and reading the per-VM field would be racy.  If correctness
matters, KVM either needs to use the per-vCPU status (if appropriate),
take the lock, or have some other mechanism that guarantees the per-VM
status is correct.
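
By contrast, a lockless reader gets no such guarantee.  A rough sketch
of the kind of racy consumer described above (illustrative only; the
struct and field names are placeholders, not KVM's real structures):

  /* Sketch of a lockless reader, e.g. on an emulated MMIO fault path. */
  struct vm { unsigned long inhibit_reasons; };   /* placeholder, not struct kvm */

  static bool apic_access_inhibited(struct vm *vm)
  {
  	/*
  	 * kvm_make_all_cpus_request() only guarantees this vCPU left
  	 * the guest; it does not make the vCPU wait on the lock, so
  	 * this read can observe either the old or the new mask no
  	 * matter which side of the request the mask update landed on.
  	 */
  	return READ_ONCE(vm->inhibit_reasons) != 0;
  }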

Cc: Maxim Levitsky <mlevitsk@...hat.com>
Signed-off-by: Sean Christopherson <seanjc@...gle.com>
---
 arch/x86/kvm/x86.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4a52a08707de..960c2d196843 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9431,29 +9431,27 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_update_apicv);
 
 void __kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit)
 {
-	unsigned long old, new;
+	unsigned long old;
 
 	if (!kvm_x86_ops.check_apicv_inhibit_reasons ||
 	    !static_call(kvm_x86_check_apicv_inhibit_reasons)(bit))
 		return;
 
-	old = new = kvm->arch.apicv_inhibit_reasons;
+	old = kvm->arch.apicv_inhibit_reasons;
 
 	if (activate)
-		__clear_bit(bit, &new);
+		__clear_bit(bit, &kvm->arch.apicv_inhibit_reasons);
 	else
-		__set_bit(bit, &new);
+		__set_bit(bit, &kvm->arch.apicv_inhibit_reasons);
 
-	if (!!old != !!new) {
+	if (!!old != !!kvm->arch.apicv_inhibit_reasons) {
 		trace_kvm_apicv_update_request(activate, bit);
 		kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE);
-		kvm->arch.apicv_inhibit_reasons = new;
-		if (new) {
+		if (kvm->arch.apicv_inhibit_reasons) {
 			unsigned long gfn = gpa_to_gfn(APIC_DEFAULT_PHYS_BASE);
 			kvm_zap_gfn_range(kvm, gfn, gfn+1);
 		}
-	} else
-		kvm->arch.apicv_inhibit_reasons = new;
+	}
 }
 EXPORT_SYMBOL_GPL(__kvm_request_apicv_update);
 
-- 
2.33.0.882.g93a45727a2-goog
