Message-ID: <20210617124557.GB24457@willie-the-truck>
Date: Thu, 17 Jun 2021 13:45:57 +0100
From: Will Deacon <will@...nel.org>
To: Yanan Wang <wangyanan55@...wei.com>
Cc: Marc Zyngier <maz@...nel.org>, Quentin Perret <qperret@...gle.com>,
Alexandru Elisei <alexandru.elisei@....com>,
kvmarm@...ts.cs.columbia.edu, linux-arm-kernel@...ts.infradead.org,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Catalin Marinas <catalin.marinas@....com>,
James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Gavin Shan <gshan@...hat.com>, wanghaibin.wang@...wei.com,
zhukeqian1@...wei.com, yuzenghui@...wei.com
Subject: Re: [PATCH v7 4/4] KVM: arm64: Move guest CMOs to the fault handlers
On Thu, Jun 17, 2021 at 06:58:24PM +0800, Yanan Wang wrote:
> We currently uniformly perform CMOs of the D-cache and I-cache in
> user_mem_abort() before calling the fault handlers. If we get concurrent
> guest faults (e.g. translation faults, permission faults) or some
> unnecessary guest faults caused by BBM, the CMOs for the first vCPU are
> necessary while those for the later ones are not.
>
> By moving the CMOs into the fault handlers, we can easily identify the
> conditions where they are really needed and avoid the unnecessary ones.
> Since performing CMOs is a time-consuming process, especially when
> flushing a block range, this solution reduces the load on KVM and
> improves the efficiency of the stage-2 page table code.
>
> We can imagine two specific scenarios which will benefit greatly:
> 1) During a normal VM startup, this solution will improve the efficiency
> of handling guest page faults incurred by vCPUs when initially populating
> the stage-2 page tables.
> 2) After live migration, the heavy workload will be resumed on the
> destination VM, but all the stage-2 page tables need to be rebuilt at
> that point. This solution will ease the performance drop during the
> resume stage.
>
> Signed-off-by: Yanan Wang <wangyanan55@...wei.com>
> ---
> arch/arm64/kvm/hyp/pgtable.c | 38 +++++++++++++++++++++++++++++-------
> arch/arm64/kvm/mmu.c | 37 ++++++++++++++---------------------
> 2 files changed, 46 insertions(+), 29 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index d99789432b05..760c551f61da 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -577,12 +577,24 @@ static void stage2_put_pte(kvm_pte_t *ptep, struct kvm_s2_mmu *mmu, u64 addr,
> mm_ops->put_page(ptep);
> }
>
> +static bool stage2_pte_cacheable(struct kvm_pgtable *pgt, kvm_pte_t pte)
> +{
> + u64 memattr = pte & KVM_PTE_LEAF_ATTR_LO_S2_MEMATTR;
> + return memattr == KVM_S2_MEMATTR(pgt, NORMAL);
> +}
> +
> +static bool stage2_pte_executable(kvm_pte_t pte)
> +{
> + return !(pte & KVM_PTE_LEAF_ATTR_HI_S2_XN);
> +}
> +
> static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
> kvm_pte_t *ptep,
> struct stage2_map_data *data)
> {
> kvm_pte_t new, old = *ptep;
> u64 granule = kvm_granule_size(level), phys = data->phys;
> + struct kvm_pgtable *pgt = data->mmu->pgt;
> struct kvm_pgtable_mm_ops *mm_ops = data->mm_ops;
>
> if (!kvm_block_mapping_supported(addr, end, phys, level))
> @@ -606,6 +618,14 @@ static int stage2_map_walker_try_leaf(u64 addr, u64 end, u32 level,
> stage2_put_pte(ptep, data->mmu, addr, level, mm_ops);
> }
>
> + /* Perform CMOs before installation of the guest stage-2 PTE */
> + if (mm_ops->clean_invalidate_dcache && stage2_pte_cacheable(pgt, new))
> + mm_ops->clean_invalidate_dcache(kvm_pte_follow(new, mm_ops),
> + granule);
> +
> + if (mm_ops->invalidate_icache && stage2_pte_executable(new))
> + mm_ops->invalidate_icache(kvm_pte_follow(new, mm_ops), granule);
One thing I'm missing here is why we need the indirection via mm_ops. Are
there cases where we would want to pass a different function pointer for
invalidating the icache? If not, why not just call the function directly?
Same for the D side.
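For the sake of discussion, something like the following is what I had in
mind (purely illustrative -- it assumes the __clean_dcache_guest_page() and
__invalidate_icache_guest_page() helpers from earlier in this series, taking
a VA and a size):

        /* Perform CMOs before installation of the guest stage-2 PTE */
        if (stage2_pte_cacheable(pgt, new))
                __clean_dcache_guest_page(kvm_pte_follow(new, mm_ops), granule);

        if (stage2_pte_executable(new))
                __invalidate_icache_guest_page(kvm_pte_follow(new, mm_ops), granule);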
Will