Message-ID: <20251103170848.000023aa@linux.microsoft.com>
Date: Mon, 3 Nov 2025 17:08:48 -0800
From: Jacob Pan <jacob.pan@...ux.microsoft.com>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: <linux-kernel@...r.kernel.org>, "iommu@...ts.linux.dev"
 <iommu@...ts.linux.dev>, Will Deacon <will@...nel.org>, Joerg Roedel
 <joro@...tes.org>, Mostafa Saleh <smostafa@...gle.com>, Jason Gunthorpe
 <jgg@...dia.com>, Robin Murphy <robin.murphy@....com>, Zhang Yu
 <zhangyu1@...ux.microsoft.com>, Jean Philippe-Brucker
 <jean-philippe@...aro.org>, Alexander Grest <Alexander.Grest@...rosoft.com>
Subject: Re: [PATCH v2 2/2] iommu/arm-smmu-v3: Improve CMDQ lock fairness
 and efficiency

Hi Nicolin,

On Thu, 30 Oct 2025 19:00:02 -0700
Nicolin Chen <nicolinc@...dia.com> wrote:

> On Mon, Oct 20, 2025 at 03:43:53PM -0700, Jacob Pan wrote:
> > From: Alexander Grest <Alexander.Grest@...rosoft.com>
> > 
> > The SMMU CMDQ lock is highly contentious when there are multiple
> > CPUs issuing commands on an architecture with small queue sizes,
> > e.g. 256 entries.
> 
> As Robin pointed out, a 256-entry queue itself is not quite normal,
> so the justification here might still not be very convincing..
> 
> I'd suggest avoiding "an architecture with small queue sizes", and
> instead focusing on the issue itself -- potential starvation.
> "256-entry" can be used as a testing setup to reproduce the issue.
> 
> > The lock has the following states:
> >  - 0:		Unlocked  
> >  - >0:		Shared lock held with count  
> >  - INT_MIN+N:	Exclusive lock held, where N is the # of shared waiters
> >  - INT_MIN:	Exclusive lock held, no shared waiters
> > 
> > When multiple CPUs are polling for space in the queue, they attempt
> > to grab the exclusive lock to update the cons pointer from the
> > hardware. If they fail to get the lock, they will spin until the
> > cons pointer is updated by another CPU.
> > 
> > The current code allows the possibility of shared lock starvation
> > if there is a constant stream of CPUs trying to grab the exclusive
> > lock. This leads to severe latency issues and soft lockups.  
> 
> It'd be nicer to have a graph to show how the starvation might
> happen due to a race:
> 
> CPU0 (exclusive)  | CPU1 (shared)     | CPU2 (exclusive)    | `cmdq->lock`
> --------------------------------------------------------------------------
> trylock() //takes |                   |                     | 0
>                   | shared_lock()     |                     | INT_MIN
>                   | fetch_inc()       |                     | INT_MIN
>                   | no return         |                     | INT_MIN + 1
>                   | spins // VAL >= 0 |                     | INT_MIN + 1
> unlock()          | spins...          |                     | INT_MIN + 1
> set_release(0)    | spins...          |                     | 0  <-- BUG?
Not sure we can call it a bug, but it definitely opens the door to
starving the shared lock.

> (done)            | (sees 0)          | trylock() // takes  | 0
>                   | *exits loop*      | cmpxchg(0, INT_MIN) | 0
>                   |                   | *cuts in*           | INT_MIN
>                   | cmpxchg(0, 1)     |                     | INT_MIN
>                   | fails // != 0     |                     | INT_MIN
>                   | spins // VAL >= 0 |                     | INT_MIN
>                   | *starved*         |                     | INT_MIN
>
Thanks for the graph, will incorporate it. The starved shared lock also
prevents the cmdq from advancing, which perpetuates the
!queue_has_space(&llq, n + sync) situation.
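
For anyone mapping the diagram's shorthand to code, below is a rough
userspace C11 model of the lock paths as the diagram describes today's
behavior. It is a paraphrase of the steps above, not the driver source;
the function names are invented for the sketch.

/*
 * Rough userspace model of the cmdq lock word as used in the diagram:
 * 0 = unlocked, >0 = shared count, sign bit set = exclusive holder.
 * Not the driver source; names are invented for the sketch.
 */
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int lock;

static bool exclusive_trylock(void)
{
	int expected = 0;
	/* "cmpxchg(0, INT_MIN)": only succeeds when the word is exactly 0. */
	return atomic_compare_exchange_strong(&lock, &expected, INT_MIN);
}

static void exclusive_unlock(void)
{
	/* Current behavior: "set_release(0)" nukes any shared waiter count. */
	atomic_store_explicit(&lock, 0, memory_order_release);
}

static void shared_lock(void)
{
	/* Fast path: "fetch_inc()"; old value >= 0 means no exclusive holder. */
	if (atomic_fetch_add(&lock, 1) >= 0)
		return;

	/*
	 * Slow path: "spins // VAL >= 0", then re-claims a count with
	 * cmpxchg(VAL, VAL + 1). If another CPU's exclusive_trylock()
	 * lands in the window opened by exclusive_unlock() writing 0,
	 * the word is INT_MIN again and this loop keeps spinning --
	 * the *starved* row in the diagram.
	 */
	for (;;) {
		int val = atomic_load(&lock);

		if (val >= 0 &&
		    atomic_compare_exchange_weak(&lock, &val, val + 1))
			return;
	}
}

int main(void)
{
	/* Single-threaded sanity run of the uncontended paths. */
	shared_lock();
	printf("after shared_lock: %d\n", atomic_load(&lock));  /* 1 */
	printf("exclusive_trylock: %d\n", exclusive_trylock()); /* 0: blocked */
	atomic_fetch_sub(&lock, 1); /* shared unlock */
	printf("exclusive_trylock: %d\n", exclusive_trylock()); /* 1: takes it */
	exclusive_unlock();
	return 0;
}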
 
> And point out that it should have reserved the "+1" from CPU1
> instead of nuking the entire cmdq->lock to 0.
> 
Will do. Preserving the "+1" is useful for preventing back-to-back
exclusive lock acquisitions; nuking the lock word to 0 throws that
information away.
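
As a quick sanity check of that point, here is a minimal userspace model
of the two release strategies, with C11 atomics standing in for the
kernel's atomic_* helpers and the lock values taken from the state list
in the patch description. It only models the arithmetic, not the driver.

#include <assert.h>
#include <limits.h>
#include <stdatomic.h>

int main(void)
{
	/* Exclusive held with one shared waiter registered: INT_MIN + 1. */
	atomic_int lock = INT_MIN + 1;
	int zero = 0;

	/* Old unlock: set_release(0) discards the waiter's "+1" ... */
	atomic_store_explicit(&lock, 0, memory_order_release);
	/* ... so a back-to-back exclusive cmpxchg(0, INT_MIN) succeeds. */
	assert(atomic_compare_exchange_strong(&lock, &zero, INT_MIN));

	/* New unlock: clear only the sign bit, keeping the count. */
	atomic_store(&lock, INT_MIN + 1);
	atomic_fetch_and_explicit(&lock, ~INT_MIN, memory_order_release);
	assert(atomic_load(&lock) == 1); /* the "+1" survives */

	/* Exclusive trylock now fails until the shared holder drops it. */
	zero = 0;
	assert(!atomic_compare_exchange_strong(&lock, &zero, INT_MIN));
	return 0;
}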

> > In a staged test where 32 CPUs issue SVA invalidations
> > simultaneously on a system with a 256-entry queue, the madvise
> > (MADV_DONTNEED) latency dropped by 50% with this patch, with no
> > soft lockups observed.
> 
> This might not be very useful per Robin's remarks. I'd drop it.
> 
Will do.

> > Reviewed-by: Mostafa Saleh <smostafa@...gle.com>
> > Signed-off-by: Alexander Grest <Alexander.Grest@...rosoft.com>
> > Signed-off-by: Jacob Pan <jacob.pan@...ux.microsoft.com>  
> 
> Reviewed-by: Nicolin Chen <nicolinc@...dia.com>
> 
> > @@ -500,9 +506,14 @@ static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
> >  	__ret;								\
> >  })
> >  
> > +/*
> > + * Only clear the sign bit when releasing the exclusive lock; this will
> > + * allow any shared_lock() waiters to proceed without the possibility
> > + * of entering the exclusive lock in a tight loop.
> > + */
> >  #define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)		\
> >  ({									\
> > -	atomic_set_release(&cmdq->lock, 0);				\
> > +	atomic_fetch_and_release(~INT_MIN, &cmdq->lock);			\
> 
> Align the tailing spacing with other lines please.
> 
> Nicolin

