Message-ID: <87ftc1wq64.fsf@vitty.brq.redhat.com>
Date: Fri, 15 May 2020 10:36:19 +0200
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Michael Tsirkin <mst@...hat.com>,
Julia Suvorova <jsuvorov@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>, x86@...nel.org
Subject: Re: [PATCH RFC 4/5] KVM: x86: aggressively map PTEs in KVM_MEM_ALLONES slots
Sean Christopherson <sean.j.christopherson@...el.com> writes:
> On Thu, May 14, 2020 at 08:05:39PM +0200, Vitaly Kuznetsov wrote:
>> All PTEs in KVM_MEM_ALLONES slots point to the same read-only page
>> in KVM so instead of mapping each page upon first access we can map
>> everything aggressively.
>>
>> Suggested-by: Michael S. Tsirkin <mst@...hat.com>
>> Signed-off-by: Vitaly Kuznetsov <vkuznets@...hat.com>
>> ---
>> arch/x86/kvm/mmu/mmu.c | 20 ++++++++++++++++++--
>> arch/x86/kvm/mmu/paging_tmpl.h | 23 +++++++++++++++++++++--
>> 2 files changed, 39 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
>> index 3db499df2dfc..e92ca9ed3ff5 100644
>> --- a/arch/x86/kvm/mmu/mmu.c
>> +++ b/arch/x86/kvm/mmu/mmu.c
>> @@ -4154,8 +4154,24 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>> goto out_unlock;
>> if (make_mmu_pages_available(vcpu) < 0)
>> goto out_unlock;
>> - r = __direct_map(vcpu, gpa, write, map_writable, max_level, pfn,
>> - prefault, is_tdp && lpage_disallowed);
>> +
>> + if (likely(!(slot->flags & KVM_MEM_ALLONES) || write)) {
>
> The 'write' check is wrong. More specifically, patch 2/5 is missing code
> to add KVM_MEM_ALLONES to memslot_is_readonly(). If we end up going with
> an actual kvm_allones_pg backing, writes to an ALLONES memslots should be
> handled same as writes to RO memslots; MMIO occurs but no MMIO spte is
> created.
>
Missed that, thanks!
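I assume the missing hunk for patch 2/5 would be something along these
lines (sketch only, context lines from memory):

 static bool memslot_is_readonly(struct kvm_memory_slot *slot)
 {
-	return slot->flags & KVM_MEM_READONLY;
+	return slot->flags & (KVM_MEM_READONLY | KVM_MEM_ALLONES);
 }

so writes to ALLONES slots get emulated as MMIO without installing a
SPTE, same as regular RO slots.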
>> + r = __direct_map(vcpu, gpa, write, map_writable, max_level, pfn,
>> + prefault, is_tdp && lpage_disallowed);
>> + } else {
>> + /*
>> + * KVM_MEM_ALLONES are 4k only slots fully mapped to the same
>> + * readonly 'allones' page, map all PTEs aggressively here.
>> + */
>> + for (gfn = slot->base_gfn; gfn < slot->base_gfn + slot->npages;
>> + gfn++) {
>> + r = __direct_map(vcpu, gfn << PAGE_SHIFT, write,
>> + map_writable, max_level, pfn, prefault,
>> + is_tdp && lpage_disallowed);
>
> IMO this is a waste of memory and TLB entries. Why not treat the access as
> the MMIO it is and emulate the access with a 0xff return value? I think
> it'd be a simple change to have __kvm_read_guest_page() stuff 0xff, i.e. a
> kvm_allones_pg wouldn't be needed. I would even vote to never create an
> MMIO SPTE. The guest has bigger issues if reading from a PCI hole is
> performance sensitive.
You're trying to defeat the sole purpose of the feature :-) I also
considered the option you suggest, but Michael convinced me we should go
further. The idea (memory waste aside) was that the time we spend on the
PCI scan during boot is significant. Unfortunately, I don't have any
numbers, but we can certainly try to get them. With this feature (AFAIU)
we're not aiming at 'classic' long-living VMs but rather at something
like Kata containers/FaaS/... where boot time is crucial.
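
For completeness, if I read your suggestion correctly, it would be
something like the following in __kvm_read_guest_page() (just a sketch,
untested, assuming we drop the backing 'allones' page entirely):

static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
				 void *data, int offset, int len)
{
	int r;
	unsigned long addr;

	/* ALLONES slots always read as 0xff, no backing page needed. */
	if (slot && (slot->flags & KVM_MEM_ALLONES)) {
		memset(data, 0xff, len);
		return 0;
	}

	addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
	if (kvm_is_error_hva(addr))
		return -EFAULT;
	r = __copy_from_user(data, (void __user *)addr + offset, len);
	if (r)
		return -EFAULT;
	return 0;
}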
>
> Regarding memory, looping wantonly on __direct_map() will eventually trigger
> the BUG_ON() in mmu_memory_cache_alloc(). mmu_topup_memory_caches() only
> ensures there are enough objects available to map a single translation, i.e.
> one entry per level, sans the root[*].
>
> [*] The gorilla math in mmu_topup_memory_caches() is horrendously misleading,
> e.g. the '8' pages is really 2*(ROOT_LEVEL - 1), but the 2x part has been
> obsolete for the better part of a decade, and the '- 1' wasn't actually
> originally intended or needed, but is now required because of 5-level
> paging. I have the beginning of a series to clean up that mess; it was
> low on my todo list because I didn't expect anyone to be mucking with
> related code :-)
I missed that too, but oh well, this is the famous KVM MMU, I shouldn't
feel that bad about it :-) Thanks for your review!
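
For my own understanding, the relevant flow (as I read the current code,
so take it with a grain of salt) is roughly:

	r = mmu_topup_memory_caches(vcpu);	/* reserves objects for ONE mapping */
	if (r)
		return r;
	...
	spin_lock(&vcpu->kvm->mmu_lock);
	...
	r = __direct_map(vcpu, gpa, ...);	/* consumes up to one page per level */
	spin_unlock(&vcpu->kvm->mmu_lock);

so looping over __direct_map() without re-topping the caches (which we
can't easily do under mmu_lock since the top-up can sleep) will indeed
exhaust them eventually.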
>
>> + if (r)
>> + break;
>> + }
>> + }
>
--
Vitaly