Message-ID: <aXjqBy2sWAUMm/aY@Asurada-Nvidia>
Date: Tue, 27 Jan 2026 08:38:31 -0800
From: Nicolin Chen <nicolinc@...dia.com>
To: <jgg@...dia.com>, Pranjal Shrivastava <praan@...gle.com>
CC: <will@...nel.org>, <jean-philippe@...aro.org>, <robin.murphy@....com>,
<joro@...tes.org>, <balbirs@...dia.com>, <miko.lenczewski@....com>,
<peterz@...radead.org>, <kevin.tian@...el.com>,
<linux-arm-kernel@...ts.infradead.org>, <iommu@...ts.linux.dev>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v9 6/7] iommu/arm-smmu-v3: Add arm_smmu_invs based
arm_smmu_domain_inv_range()
Hi Pranjal,
Sorry, I missed this!
On Fri, Jan 23, 2026 at 09:48:37AM +0000, Pranjal Shrivastava wrote:
> On Fri, Dec 19, 2025 at 12:11:28PM -0800, Nicolin Chen wrote:
> > + /*
> > + * Avoid locking unless ATS is being used. No ATC invalidation can be
> > + * going on after a domain is detached.
> > + */
> > + if (invs->has_ats) {
> > + read_lock(&invs->rwlock);
>
> Shouldn't these be read_lock_irqsave for all rwlock variants here?
> Invalidations might happen in IRQ context as well..
>
> > + __arm_smmu_domain_inv_range(invs, iova, size, granule, leaf);
> > + read_unlock(&invs->rwlock);
It was kept from the older versions where we had a trylock. Jason
had an insight about this, mainly to reduce latency on invalidation
threads.

Yet now we have plain locking, and TBH I can't find a good reason
justifying the non-IRQ-safe variant anymore. It does look a bit
unsafe to me, so I think I will just change to the _irqsave version.
(Jason?)
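For what it's worth, a minimal sketch of what the _irqsave conversion
would look like, based on the fragment quoted above. The function body
and field names here are assumptions reconstructed from the quoted
hunk, not the actual patch:

```c
/*
 * Hedged sketch, not the committed code: the reader side of the
 * invalidation path, using the IRQ-safe rwlock variants. With
 * read_lock_irqsave(), local interrupts are disabled while the lock
 * is held, so an invalidation issued from IRQ context on the same
 * CPU cannot interleave with this critical section.
 */
static void arm_smmu_domain_inv_range(struct arm_smmu_invs *invs,
				      unsigned long iova, size_t size,
				      unsigned int granule, bool leaf)
{
	unsigned long flags;

	/*
	 * Avoid locking unless ATS is being used. No ATC invalidation
	 * can be going on after a domain is detached.
	 */
	if (invs->has_ats) {
		read_lock_irqsave(&invs->rwlock, flags);
		__arm_smmu_domain_inv_range(invs, iova, size, granule, leaf);
		read_unlock_irqrestore(&invs->rwlock, flags);
	} else {
		__arm_smmu_domain_inv_range(invs, iova, size, granule, leaf);
	}
}
```

The usual rule of thumb applies: if any acquirer of the lock can run
in IRQ context, the process-context acquirers that could deadlock
against it need the _irqsave variants, and the writer side would need
the same treatment.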
Thanks
Nicolin