Message-ID: <20251007111606.00005849@linux.microsoft.com>
Date: Tue, 7 Oct 2025 11:16:06 -0700
From: Jacob Pan <jacob.pan@...ux.microsoft.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: <linux-kernel@...r.kernel.org>, "iommu@...ts.linux.dev"
 <iommu@...ts.linux.dev>, Will Deacon <will@...nel.org>, Jason Gunthorpe
 <jgg@...dia.com>, Robin Murphy <robin.murphy@....com>, Zhang Yu
 <zhangyu1@...ux.microsoft.com>, Jean Philippe-Brucker
 <jean-philippe@...aro.org>, Alexander Grest <Alexander.Grest@...rosoft.com>
Subject: Re: [PATCH 2/2] iommu/arm-smmu-v3: Improve CMDQ lock fairness and
 efficiency

On Mon, 6 Oct 2025 18:08:14 -0700
Nicolin Chen <nicolinc@...dia.com> wrote:

> On Wed, Sep 24, 2025 at 10:54:38AM -0700, Jacob Pan wrote:
> >  static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
> >  {
> > -	int val;
> > -
> >  	/*
> > -	 * We can try to avoid the cmpxchg() loop by simply incrementing the
> > -	 * lock counter. When held in exclusive state, the lock counter is set
> > -	 * to INT_MIN so these increments won't hurt as the value will remain
> > -	 * negative.
> > +	 * We can simply increment the lock counter. When held in exclusive
> > +	 * state, the lock counter is set to INT_MIN so these increments won't
> > +	 * hurt as the value will remain negative.
> 
> It seems to me that the change at the first statement is not very
> necessary.
> 
I can delete "We can simply increment the lock counter." since it is
obvious, but removing the mention of the cmpxchg() loop from the comment
matches the code change that follows.

> > +	 * This will also signal the exclusive locker that there are shared
> > +	 * waiters. Once the exclusive locker releases the lock, the sign bit
> > +	 * will be cleared and our increment will make the lock counter
> > +	 * positive, allowing us to proceed.
> >  	 */
> >  	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
> >  		return;
> >  
> > -	do {
> > -		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
> > -	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
> > +	atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
> 
> The returned value is not captured for anything. Is this read()
> necessary? If so, a line of comments elaborating it?
We don't need the return value. How about this explanation?
/*
 * Someone else is holding the lock in exclusive state, so wait
 * for them to finish. Since we already incremented the lock counter,
 * no exclusive lock can be acquired until we finish. We don't need
 * the return value since we only care that the exclusive lock is
 * released (i.e. the lock counter is non-negative).
 */
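
To make the whole shared path easier to picture, here is a rough sketch of
how arm_smmu_cmdq_shared_lock() would read with that comment folded in
(illustrative only, not the exact hunk):

	static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
	{
		/*
		 * Simply increment the lock counter. When held in exclusive
		 * state, the lock counter is set to INT_MIN so this increment
		 * won't hurt as the value remains negative. It also signals
		 * the exclusive locker that there are shared waiters.
		 */
		if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
			return;

		/*
		 * Someone else holds the lock in exclusive state, so wait for
		 * them to finish. Since we already incremented the lock
		 * counter, no new exclusive lock can be acquired until we are
		 * done. The return value is not needed; we only care that the
		 * exclusive lock is released (the counter is non-negative).
		 */
		atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
	}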
> > +/*
> > + * Only clear the sign bit when releasing the exclusive lock this will
> > + * allow any shared_lock() waiters to proceed without the possibility
> > + * of entering the exclusive lock in a tight loop.
> > + */
> >  #define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
> >  ({									\
> > -	atomic_set_release(&cmdq->lock, 0);				\
> > +	atomic_fetch_and_release(~INT_MIN, &cmdq->lock);		\
> 
> By a quick skim, the whole thing looks quite smart to me. But I
> need some time to revisit and perhaps test it as well.
> 
> It's also important to get feedback from Will. Both patches are
> touching his writing that has been running for years already..
Definitely, your review is really appreciated. I think part of the reason
is that the cmdq is usually quite large, so a full queue is a rare case.
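
On the unlock side quoted above, the semantics I have in mind are roughly
as follows (a minimal sketch; the real code is the _irqrestore macro, which
also restores interrupts, and the helper name here is just for illustration):

	/*
	 * Only clear the sign bit on exclusive unlock. Any increments made by
	 * shared_lock() waiters while the exclusive lock was held are kept,
	 * so the counter turns positive and those waiters proceed right away,
	 * without the exclusive path re-entering in a tight loop.
	 */
	static inline void arm_smmu_cmdq_exclusive_unlock(struct arm_smmu_cmdq *cmdq)
	{
		atomic_fetch_and_release(~INT_MIN, &cmdq->lock);
	}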


