Message-ID: <9ffc1936-dbdc-52c8-bbd4-24c773728452@amd.com>
Date: Wed, 18 Dec 2019 15:18:06 -0600
From: Tom Lendacky <thomas.lendacky@....com>
To: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Brijesh Singh <brijesh.singh@....com>
Subject: Re: [PATCH v1 1/2] KVM: x86/mmu: Allow for overriding MMIO SPTE mask
On 12/18/19 2:27 PM, Sean Christopherson wrote:
> On Wed, Dec 18, 2019 at 01:51:23PM -0600, Tom Lendacky wrote:
>> On 12/18/19 1:45 PM, Tom Lendacky wrote:
>>> The KVM MMIO support uses bit 51 as the reserved bit to cause nested page
>>> faults when a guest performs MMIO. The AMD memory encryption support uses
>>> CPUID functions to define the encryption bit position. Given this, KVM
>>> can't assume that bit 51 will be safe all the time.
>>>
>>> Add a callback to return a reserved bit(s) mask that can be used for the
>>> MMIO pagetable entries. The callback is not responsible for setting the
>>> present bit.
>>>
>>> If a callback is registered:
>>> - any non-zero mask returned is updated with the present bit and used
>>> as the MMIO SPTE mask.
>>> - a zero mask returned results in a mask with only bit 51 set (i.e. no
>>> present bit) as the MMIO SPTE mask, similar to the way 52-bit physical
>>> addressing is handled.
>>>
>>> If no callback is registered, the current method of setting the MMIO SPTE
>>> mask is used.
>>>
>>> Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs")
>>> Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
>>> ---
>>>  arch/x86/include/asm/kvm_host.h |  4 ++-
>>>  arch/x86/kvm/mmu/mmu.c          | 54 +++++++++++++++++++++------------
>>>  arch/x86/kvm/x86.c              |  2 +-
>>>  3 files changed, 38 insertions(+), 22 deletions(-)
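
[Editor's note: a rough sketch of the callback flow the commit message
describes. The hook name get_mmio_spte_mask() and the fallback helper
kvm_default_mmio_mask() are illustrative stand-ins, not the actual
patch contents:]

	struct kvm_x86_ops {
		/* ... */
		/* Returns reserved bit(s) to use in MMIO SPTEs, 0 if none. */
		u64 (*get_mmio_spte_mask)(void);
	};

	static void kvm_set_mmio_spte_mask(void)
	{
		u64 mask;

		if (kvm_x86_ops && kvm_x86_ops->get_mmio_spte_mask) {
			mask = kvm_x86_ops->get_mmio_spte_mask();
			if (mask)
				mask |= BIT_ULL(0);	/* add the present bit */
			else
				mask = BIT_ULL(51);	/* bit 51 only, no present bit,
							 * as with 52-bit physical addressing */
		} else {
			/* No callback: current method, unchanged. */
			mask = kvm_default_mmio_mask();
		}

		kvm_mmu_set_mmio_spte_mask(mask, mask,
					   ACC_WRITE_MASK | ACC_USER_MASK);
	}
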
>>
>> This patch has some extra churn because kvm_x86_ops isn't set yet when the
>> call to kvm_set_mmio_spte_mask() is made. If it's not a problem to move
>> setting kvm_x86_ops just a bit earlier in kvm_arch_init(), some of the
>> churn can be avoided.
>
> As a completely different alternative, what about handling this purely
> within SVM code by overriding the masks during svm_hardware_setup(),
> similar to how VMX handles EPT's custom masks, e.g.:
>
> 	/*
> 	 * Override the MMIO masks if memory encryption support is enabled:
> 	 * The physical addressing width is reduced. The first bit above the
> 	 * new physical addressing limit will always be reserved.
> 	 */
> 	if (cpuid_eax(0x80000000) >= 0x8000001f) {
> 		rdmsrl(MSR_K8_SYSCFG, msr);
> 		if (msr & MSR_K8_SYSCFG_MEM_ENCRYPT) {
> 			mask = BIT_ULL(boot_cpu_data.x86_phys_bits) | BIT_ULL(0);
> 			kvm_mmu_set_mmio_spte_mask(mask, mask,
> 						   ACC_WRITE_MASK | ACC_USER_MASK);
> 		}
> 	}
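
[Editor's note: for reference, a self-contained version of the snippet
above as a helper called from svm_hardware_setup(). The function name
and local declarations are illustrative additions; the body follows the
suggestion as written. boot_cpu_data.x86_phys_bits is already reduced
by the kernel when memory encryption is active, so the first bit above
it is guaranteed reserved:]

	static void svm_adjust_mmio_mask(void)
	{
		u64 msr, mask;

		/* Nothing to do if memory encryption is not enumerated. */
		if (cpuid_eax(0x80000000) < 0x8000001f)
			return;

		rdmsrl(MSR_K8_SYSCFG, msr);
		if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
			return;

		/* First bit above the reduced width, plus the present bit. */
		mask = BIT_ULL(boot_cpu_data.x86_phys_bits) | BIT_ULL(0);

		kvm_mmu_set_mmio_spte_mask(mask, mask,
					   ACC_WRITE_MASK | ACC_USER_MASK);
	}
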
Works for me if no one has objections to doing it that way (and it will
actually make backporting to stable much easier).
Thanks,
Tom
>