Message-ID: <YtF+CF2FkS7Ho1d5@google.com>
Date:   Fri, 15 Jul 2022 14:47:36 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Yu Zhang <yu.c.zhang@...ux.intel.com>
Cc:     pbonzini@...hat.com, vkuznets@...hat.com, jmattson@...gle.com,
        joro@...tes.org, wanpengli@...cent.com, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] KVM: X86: Initialize 'fault' in
 kvm_fixup_and_inject_pf_error().

On Fri, Jul 15, 2022, Yu Zhang wrote:
> kvm_fixup_and_inject_pf_error() was introduced to fix up the error code
> (e.g., to add the RSVD flag) and inject the #PF into the guest when the
> guest MAXPHYADDR is smaller than the host's.
> 
> When it comes to nested virtualization, L0 is expected to intercept and
> fix up the #PF and then inject it into L2 directly if
> - L2.MAXPHYADDR < L0.MAXPHYADDR and
> - L1 has no intention of intercepting L2's #PF (e.g., L2 and L1 have the
>   same MAXPHYADDR value && L1 is using EPT for L2),
> instead of constructing a #PF VM Exit to L1. Currently, with PFEC_MASK
> and PFEC_MATCH both set to 0 in vmcs02, the interception and injection
> may happen on all L2 #PFs.
> 
> However, 'fault' is never fully initialized in
> kvm_fixup_and_inject_pf_error(), so fault.async_page_fault may be left
> non-zero; the #PF is then treated as a nested async page fault and
> wrongly injected into L1. Fix it by zero-initializing 'fault' at the
> beginning.

Ouch.

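For reference, the struct in question (from arch/x86/kvm/kvm_emulate.h,
approximately as of this series; layout quoted from memory, double-check
against your tree):

struct x86_exception {
	u8 vector;
	bool error_code_valid;
	u16 error_code;
	bool nested_page_fault;
	u64 address; /* cr2 or nested page fault gpa */
	u8 async_page_fault;
};

Only the fields inside the if-block below are written on the fixup path, so
async_page_fault ends up holding whatever happened to be on the stack.
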
> Fixes: 897861479c064 ("KVM: x86: Add helper functions for illegal GPA checking and page fault injection")
> Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=216178
> Reported-by: Yang Lixiao <lixiao.yang@...el.com>
> Signed-off-by: Yu Zhang <yu.c.zhang@...ux.intel.com>
> ---
>  arch/x86/kvm/x86.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 031678eff28e..3246b3c9dfb3 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12983,7 +12983,7 @@ EXPORT_SYMBOL_GPL(kvm_spec_ctrl_test_value);
>  void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_code)
>  {
>  	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
> -	struct x86_exception fault;
> +	struct x86_exception fault = {0};
>  	u64 access = error_code &
>  		(PFERR_WRITE_MASK | PFERR_FETCH_MASK | PFERR_USER_MASK);

As stupid as it may be to intentionally not fix the uninitialized data in a robust
way, I'd actually prefer to manually clear fault.async_page_fault instead of
zero-initializing the struct.  Unlike a similar bug fix in commit 159e037d2e36
("KVM: x86: Fully initialize 'struct kvm_lapic_irq' in kvm_pv_kick_cpu_op()"),
this code actually cares about async_page_fault being false as opposed to just
being _initialized_.

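The consumer here is kvm_inject_page_fault() (reached via
walk_mmu->inject_page_fault), which roughly does, as of this series:

void kvm_inject_page_fault(struct kvm_vcpu *vcpu, struct x86_exception *fault)
{
	++vcpu->stat.pf_guest;

	/* Stale, non-zero async_page_fault makes this true while in L2... */
	vcpu->arch.exception.nested_apf =
		is_guest_mode(vcpu) && fault->async_page_fault;
	if (vcpu->arch.exception.nested_apf) {
		/* ...and the #PF is then mis-routed as a nested async #PF. */
		vcpu->arch.apf.nested_apf_token = fault->address;
		kvm_queue_exception_e(vcpu, PF_VECTOR, fault->error_code);
	} else {
		kvm_queue_exception_e_p(vcpu, PF_VECTOR, fault->error_code,
					fault->address);
	}
}
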
And if another field is added to struct x86_exception in the future, leaving the
struct uninitialized means that, if such a patch misses this case, running with
various sanitizers should in theory be able to detect the bug.  I suspect no one
has found this with syzkaller due to the need to opt into running with
allow_smaller_maxphyaddr=1.

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f389691d8c04..aeed737b55c2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12996,6 +12996,7 @@ void kvm_fixup_and_inject_pf_error(struct kvm_vcpu *vcpu, gva_t gva, u16 error_c
                fault.error_code = error_code;
                fault.nested_page_fault = false;
                fault.address = gva;
+               fault.async_page_fault = false;
        }
        vcpu->arch.walk_mmu->inject_page_fault(vcpu, &fault);
 }
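
Side note: on VMX, whether L0 intercepts the guest's #PFs at all is keyed off
vmx_need_pf_intercept() (arch/x86/kvm/vmx/vmx.h), which is why this path is
dead unless the module param is set.  Roughly:

static inline bool vmx_need_pf_intercept(struct kvm_vcpu *vcpu)
{
	/* Shadow paging always needs the #PF intercept. */
	if (!enable_ept)
		return true;

	/* Otherwise, only intercept when emulating a smaller MAXPHYADDR. */
	return allow_smaller_maxphyaddr &&
	       cpuid_maxphyaddr(vcpu) < boot_cpu_data.x86_phys_bits;
}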
