Message-ID: <5ca5e4ed-82f0-369b-db61-7fcd1c148f1c@redhat.com>
Date: Fri, 11 Aug 2023 11:13:46 +0800
From: Shaoqin Huang <shahuang@...hat.com>
To: Raghavendra Rao Ananta <rananta@...gle.com>,
Oliver Upton <oliver.upton@...ux.dev>,
Marc Zyngier <maz@...nel.org>,
James Morse <james.morse@....com>,
Suzuki K Poulose <suzuki.poulose@....com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Huacai Chen <chenhuacai@...nel.org>,
Zenghui Yu <yuzenghui@...wei.com>,
Anup Patel <anup@...infault.org>,
Atish Patra <atishp@...shpatra.org>,
Jing Zhang <jingzhangos@...gle.com>,
Reiji Watanabe <reijiw@...gle.com>,
Colton Lewis <coltonlewis@...gle.com>,
David Matlack <dmatlack@...gle.com>,
Fuad Tabba <tabba@...gle.com>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-mips@...r.kernel.org, kvm-riscv@...ts.infradead.org,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Subject: Re: [PATCH v8 14/14] KVM: arm64: Use TLBI range-based instructions for
unmap
On 8/9/23 07:13, Raghavendra Rao Ananta wrote:
> The current implementation of the stage-2 unmap walker traverses
> the given range and, as a part of break-before-make, performs
> TLB invalidations with a DSB for every PTE. Repeating this
> combination across a large range can cause a performance bottleneck
> on some systems.
>
> Hence, if the system supports FEAT_TLBIRANGE, defer the TLB
> invalidations until the entire walk is finished, and then
> use range-based instructions to invalidate the TLBs in one go.
> Condition deferred TLB invalidation on the system supporting FWB,
> as the optimization is entirely pointless when the unmap walker
> needs to perform CMOs.
>
> Rename stage2_put_pte() to stage2_unmap_put_pte() as the function
> now serves the stage-2 unmap walker specifically, rather than
> being a generic helper.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@...gle.com>
Reviewed-by: Shaoqin Huang <shahuang@...hat.com>
> ---
> arch/arm64/kvm/hyp/pgtable.c | 40 +++++++++++++++++++++++++++++-------
> 1 file changed, 33 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 5ef098af17362..eaaae76481fa9 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -831,16 +831,36 @@ static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t n
> smp_store_release(ctx->ptep, new);
> }
>
> -static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
> - struct kvm_pgtable_mm_ops *mm_ops)
> +static bool stage2_unmap_defer_tlb_flush(struct kvm_pgtable *pgt)
> {
> /*
> - * Clear the existing PTE, and perform break-before-make with
> - * TLB maintenance if it was valid.
> + * If FEAT_TLBIRANGE is implemented, defer the individual
> + * TLB invalidations until the entire walk is finished, and
> + * then use the range-based TLBI instructions to do the
> + * invalidations. Condition deferred TLB invalidation on the
> + * system supporting FWB as the optimization is entirely
> + * pointless when the unmap walker needs to perform CMOs.
> + */
> + return system_supports_tlb_range() && stage2_has_fwb(pgt);
> +}
> +
> +static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx,
> + struct kvm_s2_mmu *mmu,
> + struct kvm_pgtable_mm_ops *mm_ops)
> +{
> + struct kvm_pgtable *pgt = ctx->arg;
> +
> + /*
> + * Clear the existing PTE, and perform break-before-make if it was
> + * valid. Depending on the system support, defer the TLB maintenance
> + * for the same until the entire unmap walk is completed.
> */
> if (kvm_pte_valid(ctx->old)) {
> kvm_clear_pte(ctx->ptep);
> - kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
> +
> + if (!stage2_unmap_defer_tlb_flush(pgt))
> + kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
> + ctx->addr, ctx->level);
> }
>
> mm_ops->put_page(ctx->ptep);
> @@ -1098,7 +1118,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
> * block entry and rely on the remaining portions being faulted
> * back lazily.
> */
> - stage2_put_pte(ctx, mmu, mm_ops);
> + stage2_unmap_put_pte(ctx, mmu, mm_ops);
>
> if (need_flush && mm_ops->dcache_clean_inval_poc)
> mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
> @@ -1112,13 +1132,19 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
>
> int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
> {
> + int ret;
> struct kvm_pgtable_walker walker = {
> .cb = stage2_unmap_walker,
> .arg = pgt,
> .flags = KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
> };
>
> - return kvm_pgtable_walk(pgt, addr, size, &walker);
> + ret = kvm_pgtable_walk(pgt, addr, size, &walker);
> + if (stage2_unmap_defer_tlb_flush(pgt))
> + /* Perform the deferred TLB invalidations */
> + kvm_tlb_flush_vmid_range(pgt->mmu, addr, size);
> +
> + return ret;
> }
>
> struct stage2_attr_data {
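
To illustrate why deferring helps, here is a rough user-space sketch of
the idea (a minimal model for illustration only; the function names and
counters below are made up and are not the kernel API): per-PTE flushes
are skipped during the walk, and a single range-based invalidation is
issued once the walk is finished.

#include <stdbool.h>
#include <stdio.h>

static unsigned long per_pte_flushes;
static unsigned long range_flushes;

/* Stand-in for a per-entry TLBI + DSB (__kvm_tlb_flush_vmid_ipa). */
static void flush_one(unsigned long ipa)
{
	(void)ipa;
	per_pte_flushes++;
}

/* Stand-in for a single range-based TLBI (kvm_tlb_flush_vmid_range). */
static void flush_range(unsigned long addr, unsigned long size)
{
	(void)addr;
	(void)size;
	range_flushes++;
}

/* Mirrors the gating in stage2_unmap_defer_tlb_flush(). */
static bool defer_tlb_flush(bool has_tlbirange, bool has_fwb)
{
	return has_tlbirange && has_fwb;
}

static void unmap(unsigned long addr, unsigned long size,
		  unsigned long granule, bool defer)
{
	unsigned long ipa;

	for (ipa = addr; ipa < addr + size; ipa += granule) {
		/* "Clear the PTE", then flush it unless deferring. */
		if (!defer)
			flush_one(ipa);
	}

	/* One invalidation for the whole range once the walk is done. */
	if (defer)
		flush_range(addr, size);
}

int main(void)
{
	/* Unmap 2MB of 4K pages, without and with TLBIRANGE + FWB. */
	unmap(0x80000000UL, 0x200000UL, 0x1000UL, defer_tlb_flush(false, true));
	unmap(0x80000000UL, 0x200000UL, 0x1000UL, defer_tlb_flush(true, true));

	printf("per-PTE flushes: %lu, range flushes: %lu\n",
	       per_pte_flushes, range_flushes);
	return 0;
}

With a 2MB range of 4K pages, that collapses 512 individual
invalidations into a single one, which is the point of the patch.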
--
Shaoqin