Message-ID: <87h7vqeq8o.fsf@vitty.brq.redhat.com>
Date:   Thu, 04 Jun 2020 18:43:19 +0200
From:   Vitaly Kuznetsov <vkuznets@...hat.com>
To:     Sean Christopherson <sean.j.christopherson@...el.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: nVMX: Inject #GP when nested_vmx_get_vmptr() fails to read guest memory

Sean Christopherson <sean.j.christopherson@...el.com> writes:

> On Thu, Jun 04, 2020 at 05:33:25PM +0200, Vitaly Kuznetsov wrote:
>> Sean Christopherson <sean.j.christopherson@...el.com> writes:
>> 
>> > On Thu, Jun 04, 2020 at 04:40:52PM +0200, Paolo Bonzini wrote:
>> >> On 04/06/20 16:31, Vitaly Kuznetsov wrote:
>> >
>> > ...
>> >
>> >> > KVM could've handled the request correctly by going to userspace and
>> >> > performing I/O, but there doesn't seem to be a legitimate need for such requests
>> >> > in the first place. Sane guests should not call VMXON/VMPTRLD/VMCLEAR with
>> >> > anything but normal memory. Just inject #GP to find insane ones.
>> >> > 
>> 
>> ...
>> 
>> >> 
>> >> looks good but we need to do the same in handle_vmread, handle_vmwrite,
>> >> handle_invept and handle_invvpid.  Which probably means adding something
>> >> like nested_inject_emulation_fault to commonize the inner "if".
>> >
>> > Can we just kill the guest already instead of throwing more hacks at this
>> > and hoping something sticks?  We already have one in
>> > kvm_write_guest_virt_system...
>> >
>> >   commit 541ab2aeb28251bf7135c7961f3a6080eebcc705
>> >   Author: Fuqian Huang <huangfq.daxian@...il.com>
>> >   Date:   Thu Sep 12 12:18:17 2019 +0800
>> >
>> >     KVM: x86: work around leak of uninitialized stack contents
>> >
>> 
>> Oh I see...
>> 
>> [...]
>> 
>> Let's get back to the 'vm_bugged' idea then? 
>> 
>> https://lore.kernel.org/kvm/87muadnn1t.fsf@vitty.brq.redhat.com/
>
> Hmm, I don't think we need to go that far.  The 'vm_bugged' idea was more
> to handle cases where KVM itself (or hardware) screwed something up and
> detects an issue deep in a call stack with no recourse for reporting the
> error up the stack.
>
> That isn't the case here.  Unless I'm mistaken, the end result is similar
> to this patch, except that KVM would exit to userspace with
> KVM_INTERNAL_ERROR_EMULATION instead of injecting a #GP.  E.g.

I just wanted to resurrect that 'vm_bugged' idea but was waiting for a
good opportunity :-)

The advantage of KVM_EXIT_INTERNAL_ERROR is that we're not inventing
behavior which isn't in the SDM, and it makes it a bit more likely that
we'll actually get a bug report from an angry user.
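
(For reference, a minimal sketch of what a VMM would see on such an
exit; the kvm_run fields are from the KVM API, the helper itself is
hypothetical:)

#include <stdio.h>
#include <stdlib.h>
#include <linux/kvm.h>

/* Called after KVM_RUN returns; 'run' is the mmap'ed struct kvm_run
 * for the vCPU. */
static void check_internal_error(struct kvm_run *run)
{
	if (run->exit_reason != KVM_EXIT_INTERNAL_ERROR)
		return;

	if (run->internal.suberror == KVM_INTERNAL_ERROR_EMULATION) {
		/* ndata is 0 in the sketch above, nothing more to decode */
		fprintf(stderr, "KVM: emulation error on a nested VMX instruction\n");
		abort();
	}
}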

>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 9c74a732b08d..e13d2c0014e2 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -4624,6 +4624,20 @@ void nested_vmx_pmu_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
>         }
>  }
>
> +static int nested_vmx_handle_memory_failure(struct kvm_vcpu *vcpu, int ret,
> +                                           struct x86_exception *e)
> +{
> +       if (ret == X86EMUL_PROPAGATE_FAULT) {
> +               kvm_inject_emulated_page_fault(vcpu, e);
> +               return 1;
> +       }
> +
> +       vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> +       vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
> +       vcpu->run->internal.ndata = 0;
> +       return 0;
> +}
> +
>  static int nested_vmx_get_vmptr(struct kvm_vcpu *vcpu, gpa_t *vmpointer)
>  {
>         gva_t gva;
> @@ -4634,11 +4648,9 @@ static int nested_vmx_get_vmptr(struct kvm_vcpu *vcpu, gpa_t *vmpointer)
>                                 sizeof(*vmpointer), &gva))
>                 return 1;
>
> -       if (kvm_read_guest_virt(vcpu, gva, vmpointer, sizeof(*vmpointer), &e)) {
> -               kvm_inject_emulated_page_fault(vcpu, &e);
> -               return 1;
> -       }
> -
> +       r = kvm_read_guest_virt(vcpu, gva, vmpointer, sizeof(*vmpointer), &e);
> +       if (r)
> +               return nested_vmx_handle_memory_failure(vcpu, r, &e);
>         return 0;
>  }
>

... and the same for handle_vmread, handle_vmwrite, handle_invept and
handle_invvpid as suggested by Paolo. I'll be sending this as v2 with
your Suggested-by: shortly.
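
For illustration, the handle_invept conversion would look roughly like
this (a sketch from memory, assuming an added 'int r;' local; the
context lines and variable names are assumptions, not the actual v2
hunk):

-	if (kvm_read_guest_virt(vcpu, gva, &operand, sizeof(operand), &e)) {
-		kvm_inject_emulated_page_fault(vcpu, &e);
-		return 1;
-	}
+	r = kvm_read_guest_virt(vcpu, gva, &operand, sizeof(operand), &e);
+	if (r)
+		return nested_vmx_handle_memory_failure(vcpu, r, &e);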

>
>
> Side topic, I have some preliminary patches for the 'vm_bugged' idea.  I'll
> try to whip them into something that can be posted upstream in the next few
> weeks.
>

Sounds great!

-- 
Vitaly
