Message-ID: <aNrxjkUEEUzKU+za@Asurada-Nvidia>
Date: Mon, 29 Sep 2025 13:52:30 -0700
From: Nicolin Chen <nicolinc@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: <will@...nel.org>, <robin.murphy@....com>, <joro@...tes.org>,
	<jean-philippe@...aro.org>, <miko.lenczewski@....com>, <balbirs@...dia.com>,
	<peterz@...radead.org>, <smostafa@...gle.com>, <kevin.tian@...el.com>,
	<praan@...gle.com>, <linux-arm-kernel@...ts.infradead.org>,
	<iommu@...ts.linux.dev>, <linux-kernel@...r.kernel.org>,
	<patches@...ts.linux.dev>
Subject: Re: [PATCH rfcv2 6/8] iommu/arm-smmu-v3: Populate smmu_domain->invs
 when attaching masters

On Wed, Sep 24, 2025 at 06:42:15PM -0300, Jason Gunthorpe wrote:
> On Mon, Sep 08, 2025 at 04:27:00PM -0700, Nicolin Chen wrote:
> > Update the invs array with the invalidations required by each domain type
> > during attachment operations.
> > 
> > Only an SVA domain or a paging domain will have an invs array:
> >  a. SVA domain will add an INV_TYPE_S1_ASID per SMMU and an INV_TYPE_ATS
> >     per SID
> > 
> >  b. Non-nesting-parent paging domain with no ATS-enabled master will add
> >     a single INV_TYPE_S1_ASID or INV_TYPE_S2_VMID per SMMU
> > 
> >  c. Non-nesting-parent paging domain with ATS-enabled master(s) will do
> >     (b) and add an INV_TYPE_ATS per SID
> > 
> >  d. Nesting-parent paging domain will add an INV_TYPE_S2_VMID followed by
> >     an INV_TYPE_S2_VMID_S1_CLEAR per vSMMU. For an ATS-enabled master, it
> >     will add an INV_TYPE_ATS_FULL per SID
> 
> Just some minor forward looking clarification - this behavior should
> be triggered when a nest-parent is attached through the viommu using
> a nesting domain with a vSTE.
> 
> A nesting-parent that is just normally attached should act like a
> normal S2 since it does not and can not have a two stage S1 on top of
> it.
> 
> We can't quite get there yet until the next series of changing how the
> VMID allocation works.

Yea, you are right. Let's add this:

Note that case #d covers the case where a nesting parent domain is attached
through a vSMMU instance using a nested domain that carries a vSTE. This
model allocates a VMID per vSMMU instance, versus the current driver
allocating one per S2 domain. So, it requires a few more patches for S2
domain sharing.
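
To illustrate case #d (purely a hypothetical sketch; the struct layout and
field names below are my shorthand, not the exact definitions in this
series), a nesting parent attached via a vSMMU with one ATS-enabled master
would end up with entries along these lines:

	/*
	 * Hypothetical sketch of case #d; field names and values are
	 * assumptions for illustration only.
	 */
	struct arm_smmu_inv {
		u8 type;          /* INV_TYPE_* */
		u32 id;           /* ASID, VMID, or SID depending on type */
		refcount_t users; /* attached masters referencing the entry */
	};

	static const struct arm_smmu_inv example_invs[] = {
		{ .type = INV_TYPE_S2_VMID,          .id = 1, .users = REFCOUNT_INIT(1) },
		{ .type = INV_TYPE_S2_VMID_S1_CLEAR, .id = 1, .users = REFCOUNT_INIT(1) },
		{ .type = INV_TYPE_ATS_FULL,         .id = 8, .users = REFCOUNT_INIT(1) },
	};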

> > The per-domain invalidation is not needed, until the domain is attached to
> > a master, i.e. a possible translation request. Giving this clears a way to
> > allowing the domain to be attached to many SMMUs, and avoids any pointless
> > invalidation overheads during a teardown if there are no STE/CDs referring
> > to the domain. This also means, when the last device is detached, the old
> > domain must flush its ASID or VMID because any iommu_unmap() call after it
> > wouldn't initiate any invalidation given an empty domain invs array.
> 
> Grammar/phrasing in this paragraph

OK. I asked AI to rewrite it:

The per-domain invalidation is not needed until the domain is attached to
a master, i.e. until translation requests (and thus TLB fills) become
possible. This makes it possible to attach the domain to multiple SMMUs,
and avoids pointless invalidation overhead during a teardown when no
STEs/CDs refer to the domain. It also means that, when the last device is
detached, the old domain must flush its ASID or VMID, since any later
iommu_unmap() call would trigger no invalidation given an empty
domain->invs array.
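
In other words, the last-detach path conceptually does something like this
(a sketch only; the example_flush_*() helpers are placeholders, not
functions from this series):

	static void example_last_detach_flush(struct arm_smmu_domain *smmu_domain)
	{
		/*
		 * domain->invs is empty now, so a later iommu_unmap() would
		 * issue no invalidation; flush the stale TLB entries here.
		 */
		if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1)
			example_flush_asid(smmu_domain);	/* placeholder */
		else
			example_flush_vmid(smmu_domain);	/* placeholder */
	}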

> > @@ -1183,8 +1183,11 @@ size_t arm_smmu_invs_unref(struct arm_smmu_invs *invs,
> >  			i++;
> >  		} else if (cmp == 0) {
> >  			/* same item */
> > -			if (refcount_dec_and_test(&invs->inv[i].users))
> > +			if (refcount_dec_and_test(&invs->inv[i].users)) {
> > +				/* Notify the caller about this deletion */
> > +				refcount_set(&to_unref->inv[j].users, 1);
> >  				num_dels++;
> 
> This is a bit convoluted. Instead of marking the entry and then
> iterating the list again just directly call a function to do the
> invalidation.

OK. If we want to generalize this arm_smmu_invs_unref function,
I suppose we will need to pass in a callback function pointer.
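
Something like the below (a rough sketch; the loop body is my guess at
the existing structure, and arm_smmu_inv_cmp() stands in for whatever
comparator the series actually uses):

	typedef void (*arm_smmu_inv_cb)(struct arm_smmu_inv *inv);

	static size_t example_invs_unref(struct arm_smmu_invs *invs,
					 struct arm_smmu_invs *to_unref,
					 arm_smmu_inv_cb on_delete)
	{
		size_t num_dels = 0, i = 0, j = 0;

		while (i != invs->num_invs && j != to_unref->num_invs) {
			int cmp = arm_smmu_inv_cmp(&invs->inv[i],
						   &to_unref->inv[j]);

			if (cmp < 0) {
				/* entry not in to_unref, keep it */
				i++;
			} else if (cmp == 0) {
				/* same item */
				if (refcount_dec_and_test(&invs->inv[i].users)) {
					/* invalidate right here, no second pass */
					on_delete(&invs->inv[i]);
					num_dels++;
				}
				i++;
				j++;
			} else {
				j++;
			}
		}
		return num_dels;
	}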

> > +	if (!new_invs) {
> > +		size_t new_num = old_invs->num_invs;
> > +
> > +		/*
> > +		 * OOM. Couldn't make a copy. Leave the array unoptimized. But
> > +		 * trim its size if some trailing entries are marked as trash.
> > +		 */
> > +		while (new_num != 0) {
> > +			if (refcount_read(&old_invs->inv[new_num - 1].users))
> > +				break;
> > +			new_num--;
> > +		}
> 
> Would be nicer to have arm_smmu_invs_unref return the new size so we
> don't need this loop

The "new size" must be invs->num_invs subtracting the number of
the tailing trash entries. So, arm_smmu_invs_unref() would have
to have the same loop validating the tailing entries, right?
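
I.e. the same scan would just move to the end of arm_smmu_invs_unref()
before returning, something like:

	size_t new_num = invs->num_invs;

	/* drop trailing entries whose refcount already hit zero */
	while (new_num != 0 &&
	       !refcount_read(&invs->inv[new_num - 1].users))
		new_num--;
	return new_num;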

(I will address all other comments as well)

Thanks!
Nicolin
