Message-ID: <aK8+FvkbpB9G9YA+@Asurada-Nvidia>
Date: Wed, 27 Aug 2025 10:19:18 -0700
From: Nicolin Chen <nicolinc@...dia.com>
To: Jason Gunthorpe <jgg@...dia.com>
CC: <will@...nel.org>, <robin.murphy@....com>, <joro@...tes.org>,
<jean-philippe@...aro.org>, <miko.lenczewski@....com>, <balbirs@...dia.com>,
<peterz@...radead.org>, <smostafa@...gle.com>, <kevin.tian@...el.com>,
<praan@...gle.com>, <zhangzekun11@...wei.com>,
<linux-arm-kernel@...ts.infradead.org>, <iommu@...ts.linux.dev>,
<linux-kernel@...r.kernel.org>, <patches@...ts.linux.dev>
Subject: Re: [PATCH rfcv1 4/8] iommu/arm-smmu-v3: Introduce a per-domain
arm_smmu_invs array
On Wed, Aug 27, 2025 at 01:48:04PM -0300, Jason Gunthorpe wrote:
> On Wed, Aug 13, 2025 at 06:25:35PM -0700, Nicolin Chen wrote:
> > +/**
> > + * arm_smmu_invs_del() - Remove @del_invs from @old_invs
> > + * @old_invs: the old invalidation array
> > + * @del_invs: an array of invalidations to delete
> > + *
> > + * Return: a newly allocated and sorted invalidation array on success, or an
> > + * ERR_PTR.
> > + *
> > + * This function must be called with a lock held, serialized against
> > + * arm_smmu_invs_add/dec(), but does not lockdep-assert any lock, to allow
> > + * KUNIT tests.
> > + *
> > + * Caller is responsible for freeing both @old_invs and the returned array.
> > + *
> > + * Entries marked as trash will be completely removed in the returned array.
> > + */
> > +VISIBLE_IF_KUNIT
> > +struct arm_smmu_invs *arm_smmu_invs_del(struct arm_smmu_invs *old_invs,
> > + struct arm_smmu_invs *del_invs)
> > +{
>
> Having looked at this more completely, I think we should drop this
> function.
>
> Just always do decr, then have a simple function to compact the list
> after the decr:
But the _dec function will always take the write lock, which seems
to lose the benefit of using an RCU array?
Also, given its complexity, the _dec function wouldn't release the
lock very quickly. The current version only invokes it upon kmalloc
failure, which can be seen as a very rare case with minimal impact.
> struct arm_smmu_invs *arm_smmu_invs_cleanup(struct arm_smmu_invs *invs,
> size_t to_del)
> {
> struct arm_smmu_invs *new_invs;
> size_t i, j;
>
> if (WARN_ON(invs->num_invs < to_del))
> return NULL;
>
> new_invs = arm_smmu_invs_alloc(invs->num_invs - to_del);
> if (IS_ERR(new_invs))
> return NULL;
>
> for (i = 0, j = 0; i != invs->num_invs; i++) {
> if (!refcount_read(&invs->inv[i].users))
> continue;
> new_invs->inv[j] = invs->inv[i];
> j++;
> }
> return new_invs;
> }
>
> If this returns NULL then just leave the list alone, it is OK to sit
> there with the 0 users left behind.
Yeah, that's better than waiting for the next _add call to compact
the list.
> No need for the complex _del function and the _decr function..
>
> This also means the memory doesn't need to be preallocated and it
> significantly simplifies alot of the logic.
By "preallocated" you mean "master->scratch_invs"? I think it will
still be needed for the "dec_invs" argument in:
size_t arm_smmu_invs_dec(struct arm_smmu_invs *invs,
struct arm_smmu_invs *dec_invs);
?
Or you mean the allocation in arm_smmu_invs_del()? Yes, this drops
the kmalloc in the detach path.
Thanks
Nicolin