Message-ID: <8fb77d9d-1820-3d6a-78c2-dc0237bedac5@citrix.com>
Date: Fri, 9 Jun 2017 04:13:23 +0100
From: Andrew Cooper <andrew.cooper3@...rix.com>
To: Andy Lutomirski <luto@...nel.org>, X86 ML <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
kvm list <kvm@...r.kernel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Borislav Petkov <bp@...en8.de>,
Thomas Garnier <thgarnie@...gle.com>,
Juergen Gross <jgross@...e.com>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>
Subject: Re: Speeding up VMX with GDT fixmap trickery?
On 09/06/2017 02:13, Andy Lutomirski wrote:
> Hi all-
>
> As promised when Thomas did his GDT fixmap work, here is a draft patch
> to speed up KVM by extending it.
>
> The downside of this patch is that it makes the fixmap significantly
> larger on 64-bit systems if NR_CPUS is large (it adds 15 more pages
> per CPU). I don't know if we care at all. It also bloats the kernel
> image by 4k and wastes 4k of RAM for the entire time the system is
> booted. We could avoid the latter bit (sort of) by not mapping the
> extra fixmap pages at all and handling the resulting faults somehow.
> That would scare me -- now we have IRET generating #PF when running
> malicious userspace, and that way lies utter madness.
>
> The upside is that we don't need to do LGDT after a vmexit on VMX.
> LGDT is slooooooooooow. But no, I haven't benchmarked this yet.
>
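On VM exit the CPU reloads GDTR.base from the VMCS host-state area but
unconditionally sets GDTR.limit to 0xFFFF. A minimal sketch of the shape
of the trick (hypothetical helper name; the commit linked below is the
actual patch): map each CPU's fixmap GDT across a full 64k (16 pages,
hence the 15 extra pages above), so that the forced limit is always
valid and the LGDT can be dropped.

static void set_host_gdt_base(int cpu)
{
	/* Fixmap (read-only) alias of this CPU's GDT. */
	unsigned long base = (unsigned long)get_cpu_gdt_ro(cpu);

	/* Hardware restores GDTR.base from here on every VM exit. */
	vmcs_writel(HOST_GDTR_BASE, base);

	/*
	 * With the full 64k behind the base mapped, the forced
	 * GDTR.limit of 0xFFFF is already valid, so no
	 * load_fixmap_gdt()/LGDT is needed after the exit.
	 */
}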
> What do you all think?
>
> https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?h=x86/kvm&id=e249a09787d6956b52d8260b2326d8f12f768799
>
> Andrew/Boris/Juergen: what does Xen think about setting a very high
> GDT limit? Will it let us? Should I fix it by changing
> load_fixmap_gdt() (i.e. uncommenting the commented bit) or by teaching
> the Xen paravirt code to just ignore the monstrous limit? Or is it
> not a problem in the first place?
When running PV, any selector under 0xe000 is fair game, and anything
over that is Xen's.
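That boundary is spelled out in Xen's public ABI headers; the helper
below is a hypothetical illustration built on those constants, not
actual Xen code:

/* From xen/include/public/arch-x86/xen.h: Xen owns GDT pages 14
 * and 15, i.e. descriptor byte offsets 0xe000 through 0xffff. */
#define FIRST_RESERVED_GDT_PAGE   14
#define FIRST_RESERVED_GDT_BYTE   (FIRST_RESERVED_GDT_PAGE * 4096)
#define FIRST_RESERVED_GDT_ENTRY  (FIRST_RESERVED_GDT_BYTE / 8)

/* Hypothetical helper: a PV guest selector is usable iff its
 * descriptor lies below Xen's reserved range (i.e. under 0xe000). */
static inline bool pv_guest_owned_selector(unsigned int sel)
{
	return (sel & ~7u) < FIRST_RESERVED_GDT_BYTE;
}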
OTOH, the set of software running as a PV guest and also running KVM is
empty. An HVM guest (which, when nested, is the only viable option for
running KVM) has total control over its GDT.
~Andrew