Message-ID: <YXEgOf1JzTmdRP6u@google.com>
Date: Thu, 21 Oct 2021 17:09:29 +0900
From: Sergey Senozhatsky <senozhatsky@...omium.org>
To: David Matlack <dmatlack@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Suleiman Souhlal <suleiman@...gle.com>,
kvm list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>
Subject: Re: [PATCHV2 1/3] KVM: x86: introduce kvm_mmu_pte_prefetch structure
On (21/10/21 12:28), Sergey Senozhatsky wrote:
> >
> > We are using TDP. And somehow I never see (literally never) async PFs.
> > It's always either hva_to_pfn_fast() or hva_to_pfn_slow() or
> > __direct_map() from tdp_page_fault().
>
> Hmm, and tdp_page_fault()->fast_page_fault() always fails on
> !is_access_allowed(error_code, new_spte); it never handles the faults.
> And I see some ->mmu_lock contention:
>
> spin_lock(&vcpu->kvm->mmu_lock);
> __direct_map();
> spin_unlock(&vcpu->kvm->mmu_lock);
>
> So it might be that we set up guest memory wrongly and never get the
> advantages of TDP and fast page faults?
No, never mind, that's probably expected and ->mmu_lock contention is not
severe.
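
For readers following along, here is a minimal standalone sketch of the
check being discussed above: fast_page_fault() only succeeds when
is_access_allowed() says the faulting access is already permitted by the
existing SPTE (e.g. a spurious or locklessly fixable fault); anything
else falls through to the slow path, which takes ->mmu_lock and calls
__direct_map(). The bit definitions and helpers below are simplified,
illustrative stand-ins, not the actual arch/x86/kvm/mmu code.

/*
 * Simplified model of the fast-fault permission check. Real KVM does
 * considerably more (access-track restore, lockless cmpxchg retry, etc.).
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified fault error-code bits and SPTE bits. */
#define ERR_WRITE	(1u << 1)
#define ERR_FETCH	(1u << 4)

#define SPTE_PRESENT	(1ull << 0)
#define SPTE_WRITE	(1ull << 1)
#define SPTE_EXEC	(1ull << 2)

/* Roughly what is_access_allowed() asks of the SPTE. */
static bool access_allowed(uint32_t error_code, uint64_t spte)
{
	if (!(spte & SPTE_PRESENT))
		return false;
	if ((error_code & ERR_WRITE) && !(spte & SPTE_WRITE))
		return false;
	if ((error_code & ERR_FETCH) && !(spte & SPTE_EXEC))
		return false;
	return true;
}

int main(void)
{
	/* First touch of a page: SPTE not present, fast path cannot help. */
	uint64_t spte = 0;
	uint32_t err = ERR_WRITE;

	if (access_allowed(err, spte))
		printf("fast path: spurious fault, nothing to do\n");
	else
		printf("fast path fails -> take mmu_lock, __direct_map()\n");
	return 0;
}

In other words, a guest touching its memory for the first time will
always miss the fast path and go through __direct_map() under
->mmu_lock, which is the behaviour observed in the quoted text.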