Message-ID: <00b0f9eb-286b-72e8-40b5-02f9576f2ce3@redhat.com>
Date: Thu, 3 Sep 2020 20:03:27 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Jim Mattson <jmattson@...gle.com>,
Mohammed Gamal <mgamal@...hat.com>
Cc: kvm list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Joerg Roedel <joro@...tes.org>
Subject: Re: [PATCH] KVM: x86: VMX: Make smaller physical guest address space
support user-configurable
On 03/09/20 19:57, Jim Mattson wrote:
> On Thu, Sep 3, 2020 at 7:12 AM Mohammed Gamal <mgamal@...hat.com> wrote:
>> This patch exposes allow_smaller_maxphyaddr to the user as a module parameter.
>>
>> Since smaller physical address spaces are only supported on VMX, the parameter
>> is only exposed in the kvm_intel module.
>> Modifications to VMX page fault and EPT violation handling will depend on whether
>> that parameter is enabled.
>>
>> Also disable support by default, and let the user decide if they want to enable
>> it.
>
> I think a smaller guest physical address width *should* be allowed.
> However, perhaps the pedantic adherence to the architectural
> specification could be turned on or off per-VM? And, if we're going to
> be pedantic, I think we should go all the way and get MOV-to-CR3
> correct.
That would be way too slow. Even the current trapping of present #PFs
can introduce some slowdown, depending on the workload.
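To be concrete about where the slowdown comes from: with the feature
active, #PF is added to the VMX exception bitmap, so every guest page
fault becomes a VM exit whether or not it touches reserved bits.
Roughly (simplified from update_exception_bitmap() in
arch/x86/kvm/vmx/vmx.c; the gating helper is sketched further down):

    u32 eb = vmcs_read32(EXCEPTION_BITMAP);

    /* Intercept all guest #PFs so KVM can synthesize a reserved-bit
     * fault whenever the guest's (smaller) MAXPHYADDR demands one. */
    if (vmx_need_pf_intercept(vcpu))
            eb |= 1u << PF_VECTOR;

    vmcs_write32(EXCEPTION_BITMAP, eb);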
> Does the typical guest care about whether or not setting any of the
> bits 51:46 in a PFN results in a fault?
At least KVM with shadow pages does, which is a bit niche but shows
that you cannot really rely on no guest doing it. As you guessed, the
main use of the feature is machines with 5-level page tables, where
MAXPHYADDR is 52 and there are no reserved physical-address bits left;
emulating a smaller MAXPHYADDR is what allows migrating VMs from hosts
with 4-level page tables.
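Concretely: a guest configured with MAXPHYADDR = 46 must see a
reserved-bit fault when any of bits 51:46 is set in a PFN, but a
5-level host has MAXPHYADDR = 52, so the hardware page walk happily
accepts those bits and KVM has to inject the fault itself. A sketch of
the mask involved, modeled on the rsvd_bits() helper in
arch/x86/kvm/mmu.h:

    /* Mask of physical-address bits e:s, inclusive. */
    static inline u64 rsvd_bits(int s, int e)
    {
            return ((1ULL << (e - s + 1)) - 1) << s;
    }

    /* Guest MAXPHYADDR 46 on a 52-bit host: bits 51:46 must fault
     * for the guest even though the host can address them. */
    u64 guest_rsvd = rsvd_bits(46, 51);    /* 0x000fc00000000000 */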
Enabling it per-VM would not be particularly useful IMO, because if
you want to disable this code you can just give the guest the same
MAXPHYADDR as the host; that should be the common case unless you want
to do that kind of Skylake to Icelake (or similar) migration.
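The knob itself is small: a kvm_intel module parameter that gates the
#PF intercept. A minimal sketch based on the patch description (the
exact diff may differ; vmx_need_pf_intercept() and cpuid_maxphyaddr()
are the existing helpers):

    /* Module-wide knob, off by default as the patch description says. */
    static bool __read_mostly allow_smaller_maxphyaddr;
    module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);

    /* #PF interception is only needed when the guest's MAXPHYADDR is
     * really smaller than the host's, so a guest configured with the
     * host's MAXPHYADDR never takes the expensive path. */
    static bool vmx_need_pf_intercept(struct kvm_vcpu *vcpu)
    {
            if (!enable_ept)
                    return true;    /* shadow paging intercepts #PF anyway */

            return allow_smaller_maxphyaddr &&
                   cpuid_maxphyaddr(vcpu) < boot_cpu_data.x86_phys_bits;
    }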
Paolo