Message-ID: <20251216090926.GR3707837@noisy.programming.kicks-ass.net>
Date: Tue, 16 Dec 2025 10:09:26 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: will@...nel.org, jean-philippe@...aro.org, robin.murphy@....com,
joro@...tes.org, jgg@...dia.com, balbirs@...dia.com,
miko.lenczewski@....com, kevin.tian@...el.com, praan@...gle.com,
linux-arm-kernel@...ts.infradead.org, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v7 6/7] iommu/arm-smmu-v3: Add arm_smmu_invs based
arm_smmu_domain_inv_range()
On Mon, Dec 15, 2025 at 06:09:35PM -0800, Nicolin Chen wrote:
> +void arm_smmu_domain_inv_range(struct arm_smmu_domain *smmu_domain,
> +			       unsigned long iova, size_t size,
> +			       unsigned int granule, bool leaf)
> +{
> +	struct arm_smmu_invs *invs;
> +
> +	/*
> +	 * An invalidation request must follow some IOPTE change and then load
> +	 * an invalidation array. In the meantime, a domain attachment mutates
> +	 * the array and then stores an STE/CD asking SMMU HW to acquire those
> +	 * changed IOPTEs. In other words, these two are interdependent and can
> +	 * race.
> +	 *
> +	 * In a race, the RCU design (with its underlying memory barriers) can
> +	 * ensure the invalidation array always gets updated before it is
> +	 * loaded.
> +	 *
> +	 * smp_mb() is used here, paired with the smp_mb() following the array
> +	 * update in a concurrent attach, to ensure:
> +	 *  - HW sees the new IOPTEs if it walks after STE installation
> +	 *  - Invalidation thread sees the updated array with the new ASID.
> +	 *
> +	 *  [CPU0]                         | [CPU1]
> +	 *                                 |
> +	 *  change IOPTEs and TLB flush:   |
> +	 *  arm_smmu_domain_inv_range() {  | arm_smmu_install_new_domain_invs {
> +	 *    ...                          |   rcu_assign_pointer(new_invs);
> +	 *    smp_mb(); // ensure IOPTEs   |   smp_mb(); // ensure new_invs
> +	 *    ...                          |   kfree_rcu(old_invs, rcu);
> +	 *    // load invalidation array   | }
> +	 *    invs = rcu_dereference();    | arm_smmu_install_ste_for_dev {
> +	 *                                 |   STE = TTB0 // read new IOPTEs
> +	 */
> +	smp_mb();
> +
> +	rcu_read_lock();
> +	invs = rcu_dereference(smmu_domain->invs);
> +
> +	/*
> +	 * Avoid locking unless ATS is being used. No ATC invalidation can be
> +	 * going on after a domain is detached.
> +	 */
> +	if (invs->has_ats) {
> +		read_lock(&invs->rwlock);
> +		__arm_smmu_domain_inv_range(invs, iova, size, granule, leaf);
> +		read_unlock(&invs->rwlock);
> +	} else {
> +		__arm_smmu_domain_inv_range(invs, iova, size, granule, leaf);
> +	}
> +
> +	rcu_read_unlock();
> +}
> +
>  static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
>  					  unsigned long iova, size_t granule,
>  					  void *cookie)
> @@ -3280,6 +3478,12 @@ arm_smmu_install_new_domain_invs(struct arm_smmu_attach_state *state)
>  		return;
>
>  	rcu_assign_pointer(*invst->invs_ptr, invst->new_invs);
> +	/*
> +	 * We are committed to updating the STE. Ensure the invalidation array
> +	 * is visable to concurrent map/unmap threads, and acquire any racying
> +	 * IOPTE updates.
> +	 */
> +	smp_mb();
>  	kfree_rcu(invst->old_invs, rcu);
>  }
s/visable/visible/ s/racying/racing/
Anyway, if I understand the above correctly, the smp_mb() is for:
  arm_smmu_domain_inv_range()           arm_smmu_install_new_domain_invs()

  [W] IOPTE                             [Wrel] smmu_domain->invs
  smp_mb()                              smp_mb()
  [Lacq] smmu_domain->invs              [L] IOPTE
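FWIW, that's the classic store-buffering shape with a full barrier on
each side; in LKMM litmus form it is essentially tools/memory-model's
SB+fencembonceonces.litmus with the variables renamed (a minimal sketch,
the IOPTEs and the invs pointer abstracted to plain ints, names mine),
and herd7 calls the bad outcome Never:

  C smmu-invs-sb

  (* Result: Never -- at least one side must see the other's store *)

  {}

  P0(int *iopte, int *invs)
  {
  	int r0;

  	WRITE_ONCE(*iopte, 1);
  	smp_mb();
  	r0 = READ_ONCE(*invs);
  }

  P1(int *iopte, int *invs)
  {
  	int r1;

  	WRITE_ONCE(*invs, 1);
  	smp_mb();
  	r1 = READ_ONCE(*iopte);
  }

  exists (0:r0=0 /\ 1:r1=0)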
Right? But I'm not sure about your 'HW sees the new IOPTEs' claim; that
very much depends on what coherency domain the relevant hardware plays
in. For smp_mb() to work, the hardware must be in the ISH domain, while
typically devices are (if I remember my arrrrgh64 correctly) in the OSH.
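For reference, the arm64 barrier flavours (paraphrasing
arch/arm64/include/asm/barrier.h, modulo the dsb()/dmb() wrapper macros):

  #define __smp_mb()	dmb(ish)	/* inner shareable: other CPUs    */
  #define dma_rmb()	dmb(oshld)	/* outer shareable: DMA observers */
  #define dma_wmb()	dmb(oshst)
  #define mb()		dsb(sy)		/* full system */

So if the SMMU's table walker is the observer that must see the IOPTE
writes, a dmb ish only helps if the walker participates in the inner
shareable domain; otherwise this wants something from the dma_*() family
(or a DSB) -- assuming I'm reading the intent right.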
Please clarify and all that ;-)
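(For anyone following along, the publish/reclaim shape under discussion,
reduced to a standalone sketch -- struct, variable and function names
below are made up, not the driver's:)

  struct invs {
  	struct rcu_head rcu;
  	/* ... invalidation entries ... */
  };

  static struct invs __rcu *cur_invs;

  /* updater, cf. arm_smmu_install_new_domain_invs() */
  static void install_invs(struct invs *new_invs, struct invs *old_invs)
  {
  	rcu_assign_pointer(cur_invs, new_invs);	/* publish ([Wrel]) */
  	smp_mb();				/* pairs with the reader's smp_mb() */
  	kfree_rcu(old_invs, rcu);		/* reclaim once readers drain */
  }

  /* reader, cf. arm_smmu_domain_inv_range() */
  static void inv_range(void)
  {
  	struct invs *invs;

  	/* IOPTE stores precede this point */
  	smp_mb();				/* pairs with the updater's smp_mb() */

  	rcu_read_lock();
  	invs = rcu_dereference(cur_invs);	/* subscribe ([Lacq]) */
  	/* ... walk invs, issue TLB invalidations ... */
  	rcu_read_unlock();
  }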