Message-ID: <CAJHc60wF19h9ixEQK5SS+oGA41xNqrFQaH9VE6sS0DXjQFCh7A@mail.gmail.com>
Date: Tue, 16 May 2023 11:54:55 -0700
From: Raghavendra Rao Ananta <rananta@...gle.com>
To: Oliver Upton <oliver.upton@...ux.dev>
Cc: Marc Zyngier <maz@...nel.org>, James Morse <james.morse@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Ricardo Koller <ricarkol@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Jing Zhang <jingzhangos@...gle.com>,
Colton Lewis <coltonlewis@...gle.com>,
linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH v3 7/7] KVM: arm64: Use TLBI range-based instructions for unmap
On Tue, May 16, 2023 at 11:46 AM Oliver Upton <oliver.upton@...ux.dev> wrote:
>
> On Tue, May 16, 2023 at 10:21:33AM -0700, Raghavendra Rao Ananta wrote:
> > On Fri, May 12, 2023 at 10:02 AM Oliver Upton <oliver.upton@...ux.dev> wrote:
> > > > int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
> > > > {
> > > > + int ret;
> > > > + struct stage2_unmap_data unmap_data = {
> > > > + .pgt = pgt,
> > > > + /*
> > > > +	 * If FEAT_TLBIRANGE is implemented, defer the individual PTE
> > > > + * TLB invalidations until the entire walk is finished, and
> > > > + * then use the range-based TLBI instructions to do the
> > > > + * invalidations. Condition this upon S2FWB in order to avoid
> > > > + * a page-table walk again to perform the CMOs after TLBI.
> > > > + */
> > > > + .skip_pte_tlbis = system_supports_tlb_range() &&
> > > > + stage2_has_fwb(pgt),
> > >
> > > Why can't the underlying walker just call these two helpers directly?
> > > There are static keys behind these...
> > >
> > I wasn't aware of that. Although, I tried to look into the
> > definitions, but couldn't understand how static keys are at play here.
> > By any chance are you referring to the alternative_has_feature_*()
> > implementations when checking for cpu capabilities?
>
> Ah, right, these were recently changed to rely on alternative patching
> in commit 21fb26bfb01f ("arm64: alternatives: add alternative_has_feature_*()").
> Even still, the significance remains as the alternative patching
> completely eliminates a conditional branch on the presence of a
> particular feature.
>
> Initializing a local with the presence/absence of a feature defeats such
> an optimization.
>
Thanks for the info! Introducing stage2_unmap_defer_tlb_flush()
(in patch 7/7) would call these helpers directly as needed and
get rid of 'skip_pte_tlbis'.
- Raghavendra
> --
> Thanks,
> Oliver