Message-ID: <20190123183322.GB160275@google.com>
Date: Wed, 23 Jan 2019 10:33:22 -0800
From: Tom Roeder <tmroeder@...gle.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Radim Krčmář <rkrcmar@...hat.com>,
Liran Alon <liran.alon@...cle.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H . Peter Anvin" <hpa@...or.com>, x86@...nel.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
syzbot+ded1696f6b50b615b630@...kaller.appspotmail.com
Subject: Re: [RFC PATCH] kvm: x86/vmx: Use kzalloc for cached_vmcs12

On Tue, Jan 15, 2019 at 09:51:11AM -0800, Tom Roeder wrote:
> On Mon, Jan 14, 2019 at 06:43:04PM -0800, Sean Christopherson wrote:
> > On Mon, Jan 14, 2019 at 03:47:28PM -0800, Tom Roeder wrote:
> > > This changes the allocation of cached_vmcs12 to use kzalloc instead of
> > > kmalloc. This removes the information leak found by Syzkaller (see
> > > Reported-by) in this case and prevents similar leaks from happening
> > > based on cached_vmcs12.
> >
> > Is the leak specific to vmx_set_nested_state(), e.g. can we zero out
> > the memory if copy_from_user() fails instead of taking the hit on every
> > allocation?
>
> I don't know if the leak is specific to vmx_set_nested_state.
I've looked at the code more now, and it looks to me like there are
other cases where the memory could leak, though I don't know the flows
well enough to be sure. enter_vmx_operation is called from handle_vmon,
and no data is copied from the guest immediately after that, so whether
the stale allocation is ever exposed depends on what happens after
VMXON.
Even in vmx_set_nested_state, there are about 30 lines of code between
enter_vmx_operation and copy_from_user, and a couple of paths in that
window return with an error before the copy happens. So if we want to
fix this by handling all the error paths, I think the cleanest approach
would be to convert vmx_set_nested_state to goto-style error handling,
since that would let us clear the allocated memory in one place.
What do you think?