Message-ID: <ZrY1adEnEW2N-ijd@google.com>
Date: Fri, 9 Aug 2024 08:27:37 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Chao Gao <chao.gao@...el.com>
Cc: Maxim Levitsky <mlevitsk@...hat.com>, kvm@...r.kernel.org, 
	Dave Hansen <dave.hansen@...ux.intel.com>, Thomas Gleixner <tglx@...utronix.de>, 
	Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, "H. Peter Anvin" <hpa@...or.com>, 
	linux-kernel@...r.kernel.org, Paolo Bonzini <pbonzini@...hat.com>, x86@...nel.org
Subject: Re: [PATCH v3 2/2] VMX: reset the segment cache after segment
 initialization in vmx_vcpu_reset

On Tue, Aug 06, 2024, Chao Gao wrote:
> On Thu, Jul 25, 2024 at 01:52:32PM -0400, Maxim Levitsky wrote:
> >Fix this by moving the vmx_segment_cache_clear() call to be after the
> >segments are initialized.
> >
> >Note that this still doesn't fix the issue of kvm_arch_vcpu_in_kernel
> >getting stale data during the segment setup, and that issue will
> >be addressed later.
> >
> >Signed-off-by: Maxim Levitsky <mlevitsk@...hat.com>
> 
> Do you need a Fixes tag and/or Cc: stable?

Heh, it's an old one:

  Fixes: 2fb92db1ec08 ("KVM: VMX: Cache vmcs segment fields")

> 
> >---
> > arch/x86/kvm/vmx/vmx.c | 6 +++---
> > 1 file changed, 3 insertions(+), 3 deletions(-)
> >
> >diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> >index fa9f307d9b18..d43bb755e15c 100644
> >--- a/arch/x86/kvm/vmx/vmx.c
> >+++ b/arch/x86/kvm/vmx/vmx.c
> >@@ -4870,9 +4870,6 @@ void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
> > 	vmx->hv_deadline_tsc = -1;
> > 	kvm_set_cr8(vcpu, 0);
> > 
> >-	vmx_segment_cache_clear(vmx);
> >-	kvm_register_mark_available(vcpu, VCPU_EXREG_SEGMENTS);
> >-
> > 	seg_setup(VCPU_SREG_CS);
> > 	vmcs_write16(GUEST_CS_SELECTOR, 0xf000);
> > 	vmcs_writel(GUEST_CS_BASE, 0xffff0000ul);
> >@@ -4899,6 +4896,9 @@ void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
> > 	vmcs_writel(GUEST_IDTR_BASE, 0);
> > 	vmcs_write32(GUEST_IDTR_LIMIT, 0xffff);
> > 
> >+	vmx_segment_cache_clear(vmx);
> >+	kvm_register_mark_available(vcpu, VCPU_EXREG_SEGMENTS);
> 
> vmx_segment_cache_clear() is called from a few other sites. I think at least
> the call in __vmx_set_segment() should be fixed, because QEMU may read SS.AR
> right after a write to it. If the write is preempted after the cache has been
> cleared but before the new value is written into the VMCS, QEMU will find
> that SS.AR holds a stale value.

Ya, I thought the plan was to go for a more complete fix[*]?  This change isn't
wrong, but it's obviously incomplete, and will be unnecessary if the preemption
issue is resolved.

[*] https://lore.kernel.org/all/f183d215c903d4d1e85bf89e9d8b57dd6ce5c175.camel@redhat.com
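
To make the window concrete, the ordering in __vmx_set_segment() that Chao is
pointing at is roughly the below (a simplified paraphrase of the upstream code,
not the exact source), where nothing after the VMCS writes re-clears the cache:

  static void __vmx_set_segment(struct kvm_vcpu *vcpu,
                                struct kvm_segment *var, int seg)
  {
          struct vcpu_vmx *vmx = to_vmx(vcpu);
          const struct kvm_vmx_segment_field *sf = &kvm_vmx_segment_fields[seg];

          vmx_segment_cache_clear(vmx);           /* (1) cache invalidated */

          /*
           * (2) If the task is preempted here, a reader, e.g.
           * kvm_arch_vcpu_in_kernel() from the preempt notifiers or a
           * userspace ioctl, can repopulate the cache from the VMCS,
           * which still holds the _old_ segment state.
           */

          vmcs_write16(sf->selector, var->selector);
          vmcs_writel(sf->base, var->base);
          vmcs_write32(sf->limit, var->limit);
          vmcs_write32(sf->ar_bytes, vmx_segment_access_rights(var));

          /*
           * (3) The VMCS now holds the new value, but the stale entry
           * cached at (2) is never invalidated, so reads of e.g. SS.AR
           * keep returning the old value.
           */
  }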
