Date:   Wed, 22 Nov 2017 11:06:01 +0100
From:   Dmitry Vyukov <dvyukov@...gle.com>
To:     Liran Alon <LIRAN.ALON@...cle.com>
Cc:     Wanpeng Li <kernellwp@...il.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        kvm <kvm@...r.kernel.org>, Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Wanpeng Li <wanpeng.li@...mail.com>
Subject: Re: [PATCH] KVM: VMX: Fix vmx->nested freeing when no SMI handler

On Wed, Nov 22, 2017 at 10:43 AM, Liran Alon <LIRAN.ALON@...cle.com> wrote:
>
>
> On 22/11/17 11:31, Wanpeng Li wrote:
>>
>> 2017-11-22 17:07 GMT+08:00 Liran Alon <LIRAN.ALON@...cle.com>:
>>>
>>>
>>>
>>> On 22/11/17 10:45, Liran Alon wrote:
>>>>
>>>>
>>>>
>>>>
>>>> On 22/11/17 09:56, Wanpeng Li wrote:
>>>>>
>>>>>
>>>>> From: Wanpeng Li <wanpeng.li@...mail.com>
>>>>>
>>>>> Reported by syzkaller:
>>>>>
>>>>>      ------------[ cut here ]------------
>>>>>      WARNING: CPU: 5 PID: 2939 at arch/x86/kvm/vmx.c:3844 free_loaded_vmcs+0x77/0x80 [kvm_intel]
>>>>>      CPU: 5 PID: 2939 Comm: repro Not tainted 4.14.0+ #26
>>>>>      RIP: 0010:free_loaded_vmcs+0x77/0x80 [kvm_intel]
>>>>>      Call Trace:
>>>>>       vmx_free_vcpu+0xda/0x130 [kvm_intel]
>>>>>       kvm_arch_destroy_vm+0x192/0x290 [kvm]
>>>>>       kvm_put_kvm+0x262/0x560 [kvm]
>>>>>       kvm_vm_release+0x2c/0x30 [kvm]
>>>>>       __fput+0x190/0x370
>>>>>       task_work_run+0xa1/0xd0
>>>>>       do_exit+0x4d2/0x13e0
>>>>>       do_group_exit+0x89/0x140
>>>>>       get_signal+0x318/0xb80
>>>>>       do_signal+0x8c/0xb40
>>>>>       exit_to_usermode_loop+0xe4/0x140
>>>>>       syscall_return_slowpath+0x206/0x230
>>>>>       entry_SYSCALL_64_fastpath+0x98/0x9a
>>>>>
>>>>> The syzkaller testcase executes VMXON/VMLAUNCH instructions, so the
>>>>> vmx->nested state is populated, and it also issues the KVM_SMI ioctl.
>>>>> However, the testcase is just a simple C program and is not launched
>>>>> by something like SeaBIOS, which implements an SMI handler. Commit
>>>>> 05cade71cf (KVM: nSVM: fix SMI injection in guest mode) leaves guest
>>>>> mode and sets nested.vmxon to false for the duration of SMM, per SDM
>>>>> 34.14.1 "leave VMX operation" upon entering SMM. We can't alloc/free
>>>>> the vmx->nested state on every SMM entry/exit since that would add
>>>>> overhead, so vmx_pre_enter_smm() marks nested.vmxon false even though
>>>>> the vmx->nested state is still populated. The expectation is that
>>>>> em_rsm() will set nested.vmxon back to true. However, the SMI handler
>>>>> and RSM never execute because there is nothing like SeaBIOS in this
>>>>> scenario. free_nested() then fails to free the vmx->nested state
>>>>> since vmx->nested.vmxon is false, which results in the above warning.
>>>>>
>>>>> This patch fixes it by also considering the no-SMI-handler case.
>>>>> Luckily, vmx->nested.smm.vmxon is set from the value of
>>>>> vmx->nested.vmxon in vmx_pre_enter_smm(), so we can take advantage of
>>>>> it and free the vmx->nested state when L1 goes down.
>>>>>
>>>>> Reported-by: Dmitry Vyukov <dvyukov@...gle.com>
>>>>> Cc: Paolo Bonzini <pbonzini@...hat.com>
>>>>> Cc: Radim Krčmář <rkrcmar@...hat.com>
>>>>> Cc: Dmitry Vyukov <dvyukov@...gle.com>
>>>>> Fixes: 05cade71cf (KVM: nSVM: fix SMI injection in guest mode)
>>>>> Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
>>>>> ---
>>>>>    arch/x86/kvm/vmx.c | 2 +-
>>>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>>>>> index dccc0f7..ed22425 100644
>>>>> --- a/arch/x86/kvm/vmx.c
>>>>> +++ b/arch/x86/kvm/vmx.c
>>>>> @@ -7372,7 +7372,7 @@ static inline void nested_release_vmcs12(struct vcpu_vmx *vmx)
>>>>>     */
>>>>>    static void free_nested(struct vcpu_vmx *vmx)
>>>>>    {
>>>>> -    if (!vmx->nested.vmxon)
>>>>> +    if (!vmx->nested.vmxon && !vmx->nested.smm.vmxon)
>>>>>            return;
>>>>>
>>>>>        vmx->nested.vmxon = false;
>>>>>
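For readers without the tree handy, here is a simplified sketch of the SMM
hooks the commit message refers to (paraphrased from memory, not copied
verbatim; the nested guest-mode handling is omitted and the exact code in
arch/x86/kvm/vmx.c may differ):

    static int vmx_pre_enter_smm(struct kvm_vcpu *vcpu, char *smstate)
    {
            struct vcpu_vmx *vmx = to_vmx(vcpu);

            /* Remember whether L1 had done VMXON, then "leave VMX
             * operation" for the duration of SMM (SDM 34.14.1). */
            vmx->nested.smm.vmxon = vmx->nested.vmxon;
            vmx->nested.vmxon = false;

            return 0;
    }

    static int vmx_pre_leave_smm(struct kvm_vcpu *vcpu, u64 smbase)
    {
            struct vcpu_vmx *vmx = to_vmx(vcpu);

            /* Only reached if the guest actually executes RSM. */
            if (vmx->nested.smm.vmxon) {
                    vmx->nested.vmxon = true;
                    vmx->nested.smm.vmxon = false;
            }

            return 0;
    }

Without an SMI handler, RSM never runs, so only the first hook executes:
nested.vmxon stays false while the nested state is still allocated, which
is why free_nested() needs to look at nested.smm.vmxon as well.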
>>>> Funny bug. Great analysis.
>>>> Reviewed-by: Liran Alon <liran.alon@...cle.com>
>>>
>>>
>>> Actually, I would add one more thing to the patch:
>>> I think we should also set "vmx->nested.smm.vmxon = false;" after
>>> "vmx->nested.vmxon = false;" to correctly handle the case where VMXOFF
>>> is executed from the SMI handler. Otherwise, when the SMI handler
>>> executes RSM, we will reach vmx_pre_leave_smm(), which will set
>>> "vmx->nested.vmxon = true;" again, which I think shouldn't happen.
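In code, the suggestion amounts to roughly the following shape for the top
of free_nested() (sketch only, combining the posted patch with the extra
clearing suggested above):

    if (!vmx->nested.vmxon && !vmx->nested.smm.vmxon)
            return;

    vmx->nested.vmxon = false;
    vmx->nested.smm.vmxon = false;  /* the additional clearing */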
>>
>>
>> I didn't see a real scenario for this.
>
> Actually I later saw that handle_vmoff() calls nested_vmx_check_permission(),
> which indeed won't allow execution to continue when running from the SMI
> handler, because vmx->nested.vmxon is false, and will therefore raise a
> #UD. So you are right. :)
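For reference, nested_vmx_check_permission() looks roughly like this
(paraphrased; see arch/x86/kvm/vmx.c for the authoritative version). With
nested.vmxon forced to false for the duration of SMM, any VMX instruction
executed from the SMI handler, including VMXOFF, takes the #UD path:

    static int nested_vmx_check_permission(struct kvm_vcpu *vcpu)
    {
            if (!to_vmx(vcpu)->nested.vmxon) {
                    kvm_queue_exception(vcpu, UD_VECTOR);
                    return 0;
            }

            return 1;
    }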

We will also see what syzkaller thinks about this :)
