Message-ID: <YLdmiTfgz1EqBsJd@google.com>
Date: Wed, 2 Jun 2021 11:07:53 +0000
From: Quentin Perret <qperret@...gle.com>
To: Yanan Wang <wangyanan55@...wei.com>
Cc: Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>,
Alexandru Elisei <alexandru.elisei@....com>,
kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Catalin Marinas <catalin.marinas@....com>,
James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Gavin Shan <gshan@...hat.com>, wanghaibin.wang@...wei.com,
zhukeqian1@...wei.com, yuzenghui@...wei.com
Subject: Re: [PATCH v5 6/6] KVM: arm64: Distinguish cases of memcache
allocations completely
On Thursday 15 Apr 2021 at 19:50:32 (+0800), Yanan Wang wrote:
> With a guest translation fault, the memcache pages are not needed if KVM
> only needs to install a new leaf entry into the existing page table.
> Similarly, with a guest permission fault, the memcache pages are not needed
> for a write fault during dirty logging if KVM only needs to update the
> existing leaf entry instead of collapsing a block entry into a table.
>
> By comparing fault_granule and vma_pagesize, the cases that require
> allocations from the memcache and the cases that don't can be distinguished
> completely.
>
> Signed-off-by: Yanan Wang <wangyanan55@...wei.com>
> ---
> arch/arm64/kvm/mmu.c | 25 ++++++++++++-------------
> 1 file changed, 12 insertions(+), 13 deletions(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index aa536392b308..9e35aa5d29f2 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -895,19 +895,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> gfn = fault_ipa >> PAGE_SHIFT;
> mmap_read_unlock(current->mm);
>
> - /*
> - * Permission faults just need to update the existing leaf entry,
> - * and so normally don't require allocations from the memcache. The
> - * only exception to this is when dirty logging is enabled at runtime
> - * and a write fault needs to collapse a block entry into a table.
> - */
> - if (fault_status != FSC_PERM || (logging_active && write_fault)) {
> - ret = kvm_mmu_topup_memory_cache(memcache,
> - kvm_mmu_cache_min_pages(kvm));
> - if (ret)
> - return ret;
> - }
> -
> mmu_seq = vcpu->kvm->mmu_notifier_seq;
> /*
> * Ensure the read of mmu_notifier_seq happens before we call
> @@ -970,6 +957,18 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> else if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
> prot |= KVM_PGTABLE_PROT_X;
>
> + /*
> + * Allocations from the memcache are required only when the granule of
> + * the lookup level where the guest fault happened exceeds vma_pagesize,
> + * which means new page tables will be created in the fault handler.
> + */
> + if (fault_granule > vma_pagesize) {
> + ret = kvm_mmu_topup_memory_cache(memcache,
> + kvm_mmu_cache_min_pages(kvm));
> + if (ret)
> + return ret;
> + }
You're now doing the top-up in the kvm->mmu_lock critical section. Isn't
this more or less what we try to avoid by using a memory cache?
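For reference, the usual pattern is roughly the following (a simplified
sketch of the flow around user_mem_abort(), not the exact mmu.c code):

	/*
	 * Sketch: kvm_mmu_topup_memory_cache() may sleep (it performs
	 * GFP_KERNEL allocations), so it has to run before we take
	 * kvm->mmu_lock. The stage-2 map walker then pulls any table
	 * pages it needs from the pre-filled cache, without allocating
	 * under the lock.
	 */
	ret = kvm_mmu_topup_memory_cache(memcache,
					 kvm_mmu_cache_min_pages(kvm));
	if (ret)
		return ret;

	spin_lock(&kvm->mmu_lock);
	...
	/* No allocation here; table pages come from the memcache. */
	ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
				     __pfn_to_phys(pfn), prot, memcache);
	...
	spin_unlock(&kvm->mmu_lock);

With the hunk above, the top-up (and hence the sleeping allocation) has
moved inside that critical section.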
Thanks,
Quentin