Message-ID: <dff2bbe0-60b1-8367-b033-ce7b1eb79354@redhat.com>
Date: Thu, 4 Jun 2020 18:11:29 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Cc: kvm@...r.kernel.org, Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM: nVMX: Inject #GP when nested_vmx_get_vmptr() fails
to read guest memory
On 04/06/20 18:02, Sean Christopherson wrote:
> On Thu, Jun 04, 2020 at 05:33:25PM +0200, Vitaly Kuznetsov wrote:
>> Sean Christopherson <sean.j.christopherson@...el.com> writes:
>>
>>> On Thu, Jun 04, 2020 at 04:40:52PM +0200, Paolo Bonzini wrote:
>>>> On 04/06/20 16:31, Vitaly Kuznetsov wrote:
>>>
>>> ...
>>>
>>>>> KVM could've handled the request correctly by going to userspace and
>>>>> performing I/O but there doesn't seem to be a good need for such requests
>>>>> in the first place. Sane guests should not call VMXON/VMPTRLD/VMCLEAR with
>>>>> anything but normal memory. Just inject #GP to find insane ones.
>>>>>
>>
>> ...
>>
>>>>
>>>> looks good but we need to do the same in handle_vmread, handle_vmwrite,
>>>> handle_invept and handle_invvpid. Which probably means adding something
>>>> like nested_inject_emulation_fault to commonize the inner "if".
>>>
>>> Can we just kill the guest already instead of throwing more hacks at this
>>> and hoping something sticks? We already have one in
>>> kvm_write_guest_virt_system...
>>>
>>> commit 541ab2aeb28251bf7135c7961f3a6080eebcc705
>>> Author: Fuqian Huang <huangfq.daxian@...il.com>
>>> Date: Thu Sep 12 12:18:17 2019 +0800
>>>
>>> KVM: x86: work around leak of uninitialized stack contents
>>>
>>
>> Oh I see...
>>
>> [...]
>>
>> Let's get back to 'vm_bugged' idea then?
>>
>> https://lore.kernel.org/kvm/87muadnn1t.fsf@vitty.brq.redhat.com/
>
> Hmm, I don't think we need to go that far. The 'vm_bugged' idea was more
> to handle cases where KVM itself (or hardware) screwed something up and
> detects an issue deep in a call stack with no recourse for reporting the
> error up the stack.
>
> That isn't the case here.  Unless I'm mistaken, the end result is similar
> to this patch, except that KVM would exit to userspace with
> KVM_INTERNAL_ERROR_EMULATION instead of injecting a #GP.
Indeed, all these functions are very high on the call stack and what
Sean has scribbled below would apply to all cases.
Thanks,
Paolo
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 9c74a732b08d..e13d2c0014e2 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -4624,6 +4624,20 @@ void nested_vmx_pmu_entry_exit_ctls_update(struct kvm_vcpu *vcpu)
> }
> }
>
> +static int nested_vmx_handle_memory_failure(struct kvm_vcpu *vcpu, int ret,
> + struct x86_exception *e)
> +{
> +	if (ret == X86EMUL_PROPAGATE_FAULT) {
> +		kvm_inject_emulated_page_fault(vcpu, e);
> + return 1;
> + }
> +
> + vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
> + vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
> + vcpu->run->internal.ndata = 0;
> + return 0;
> +}
> +
> static int nested_vmx_get_vmptr(struct kvm_vcpu *vcpu, gpa_t *vmpointer)
> {
> gva_t gva;
> @@ -4634,11 +4648,9 @@ static int nested_vmx_get_vmptr(struct kvm_vcpu *vcpu, gpa_t *vmpointer)
> sizeof(*vmpointer), &gva))
> return 1;
>
> - if (kvm_read_guest_virt(vcpu, gva, vmpointer, sizeof(*vmpointer), &e)) {
> - kvm_inject_emulated_page_fault(vcpu, &e);
> - return 1;
> - }
> -
> +	r = kvm_read_guest_virt(vcpu, gva, vmpointer, sizeof(*vmpointer), &e);
> +	if (r)
> +		return nested_vmx_handle_memory_failure(vcpu, r, &e);
> return 0;
> }
>
>
>
> Side topic, I have some preliminary patches for the 'vm_bugged' idea. I'll
> try to whip them into something that can be posted upstream in the next few
> weeks.
>
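As a rough sketch (not code from the thread) of what applying the helper at
one of the other call sites Paolo lists might look like, handle_invvpid could
replace its inject-#PF-or-nothing block with the same pattern; the names gva,
operand and e refer to the locals that already exist there, and the extra
"int r" local is assumed:

	/* Read the INVVPID descriptor from guest memory; on failure let the
	 * helper either reflect the emulated page fault into L1 or return to
	 * userspace with KVM_EXIT_INTERNAL_ERROR / KVM_INTERNAL_ERROR_EMULATION.
	 */
	r = kvm_read_guest_virt(vcpu, gva, &operand, sizeof(operand), &e);
	if (r)
		return nested_vmx_handle_memory_failure(vcpu, r, &e);

The net effect is the same split as in the nested_vmx_get_vmptr() hunk above:
a fault from the emulated access is still injected into the guest, while any
other failure (e.g. the operand pointing at MMIO) becomes an internal-error
exit to userspace rather than a guest-visible #GP.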