Message-ID: <48aefd0a-0987-7636-30f2-f67b06135460@redhat.com>
Date: Fri, 25 Jan 2019 18:52:02 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Tom Roeder <tmroeder@...gle.com>,
Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Jim Mattson <jmattson@...gle.com>,
Radim Krčmář <rkrcmar@...hat.com>,
Liran Alon <liran.alon@...cle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H . Peter Anvin" <hpa@...or.com>, x86@...nel.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
syzbot+ded1696f6b50b615b630@...kaller.appspotmail.com
Subject: Re: [PATCH v2] kvm: x86/vmx: Use kzalloc for cached_vmcs12
On 24/01/19 22:48, Tom Roeder wrote:
> This changes the allocation of cached_vmcs12 to use kzalloc instead of
> kmalloc. This removes the information leak found by Syzkaller (see
> Reported-by) and prevents similar leaks of stale memory through
> cached_vmcs12.
>
> It also changes vmx_get_nested_state to copy out the full 4k VMCS12_SIZE
> in copy_to_user rather than only the size of the struct.
>
> Tested: rebuilt against head, booted, and ran the syzkaller repro
> https://syzkaller.appspot.com/text?tag=ReproC&x=174efca3400000 without
> observing any problems.
>
> Reported-by: syzbot+ded1696f6b50b615b630@...kaller.appspotmail.com
> Signed-off-by: Tom Roeder <tmroeder@...gle.com>
> ---
> Changelog since v1:
> - Changed the copy_to_user calls in vmx_get_nested_state to copy the
> full 4k buffer.
>
> arch/x86/kvm/vmx/nested.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 2616bd2c7f2c7..ce81539238547 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -4140,11 +4140,11 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
> if (r < 0)
> goto out_vmcs02;
>
> - vmx->nested.cached_vmcs12 = kmalloc(VMCS12_SIZE, GFP_KERNEL);
> + vmx->nested.cached_vmcs12 = kzalloc(VMCS12_SIZE, GFP_KERNEL);
> if (!vmx->nested.cached_vmcs12)
> goto out_cached_vmcs12;
>
> - vmx->nested.cached_shadow_vmcs12 = kmalloc(VMCS12_SIZE, GFP_KERNEL);
> + vmx->nested.cached_shadow_vmcs12 = kzalloc(VMCS12_SIZE, GFP_KERNEL);
> if (!vmx->nested.cached_shadow_vmcs12)
> goto out_cached_shadow_vmcs12;
>
> @@ -5263,13 +5263,17 @@ static int vmx_get_nested_state(struct kvm_vcpu *vcpu,
> copy_shadow_to_vmcs12(vmx);
> }
>
> - if (copy_to_user(user_kvm_nested_state->data, vmcs12, sizeof(*vmcs12)))
> + /*
> + * Copy over the full allocated size of vmcs12 rather than just the size
> + * of the struct.
> + */
> + if (copy_to_user(user_kvm_nested_state->data, vmcs12, VMCS12_SIZE))
> return -EFAULT;
>
> if (nested_cpu_has_shadow_vmcs(vmcs12) &&
> vmcs12->vmcs_link_pointer != -1ull) {
> if (copy_to_user(user_kvm_nested_state->data + VMCS12_SIZE,
> - get_shadow_vmcs12(vcpu), sizeof(*vmcs12)))
> + get_shadow_vmcs12(vcpu), VMCS12_SIZE))
> return -EFAULT;
> }
>
>
Queued, thanks.
Paolo