Message-ID: <20251020115710.0000258b@linux.microsoft.com>
Date: Mon, 20 Oct 2025 11:57:10 -0700
From: Jacob Pan <jacob.pan@...ux.microsoft.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Mostafa Saleh <smostafa@...gle.com>, linux-kernel@...r.kernel.org,
 "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>, Will Deacon
 <will@...nel.org>, Robin Murphy <robin.murphy@....com>, Nicolin Chen
 <nicolinc@...dia.com>, Zhang Yu <zhangyu1@...ux.microsoft.com>, Jean-Philippe
 Brucker <jean-philippe@...aro.org>, Alexander Grest
 <Alexander.Grest@...rosoft.com>
Subject: Re: [PATCH 0/2] SMMU v3 CMDQ fix and improvement

On Mon, 20 Oct 2025 09:02:40 -0300
Jason Gunthorpe <jgg@...dia.com> wrote:

> On Fri, Oct 17, 2025 at 09:50:31AM -0700, Jacob Pan wrote:
> > On Fri, 17 Oct 2025 10:51:45 -0300
> > Jason Gunthorpe <jgg@...dia.com> wrote:
> >   
> > > On Fri, Oct 17, 2025 at 10:57:52AM +0000, Mostafa Saleh wrote:  
> > > > On Wed, Sep 24, 2025 at 10:54:36AM -0700, Jacob Pan wrote:    
> > > > > Hi Will et al,
> > > > > 
> > > > > These two patches are derived from testing the SMMU driver with
> > > > > smaller CMDQ sizes, where we see soft lockups.
> > > > > 
> > > > > This happens on the Hyper-V-emulated SMMU v3 as well as on
> > > > > bare-metal ARM servers with artificially reduced queue sizes and
> > > > > a microbenchmark to stress-test concurrency.    
> > > > 
> > > > Is it possible to share what the artificial sizes are, and does
> > > > the HW/emulation support range invalidation (IDR3.RIL)?
> > > > 
> > > > I'd expect it would be really hard to overwhelm the command
> > > > queue, unless the HW doesn't support range invalidation and/or
> > > > the number of queue entries is close to the number of CPUs.    
> > > 
> > > At least on Jacob's system there is no RIL and there are 72/144
> > > CPU cores potentially banging on this.
> > > 
> > > I think it is the combination of many required invalidation
> > > commands, low queue depth, and slow retirement of commands that
> > > makes it easier to create a queue-full condition.
> > > 
> > > Without RIL one SVA invalidation may take out the entire small
> > > queue, for example.  
> > Right, no range invalidation and queue depth is 256 in this case.  
> 
> I think Robin is asking you to justify why the queue depth is 256 when
> ARM is recommending much larger depths specifically to fix issues like
> this?
The smaller queue depth is chosen for CMD_SYNC latency reasons, but I
don't know the implementation details of the Hyper-V and host SMMU
drivers.
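
For a rough sense of why a 256-entry queue is easy to overwhelm
without RIL (assuming a 4KB granule and one leaf TLBI per page; the
exact numbers on our setup may differ):

	2MB range / 4KB granule = 512 CMD_TLBI_NH_VA commands + CMD_SYNC

so a single 2MB invalidation already generates more commands than the
whole queue holds, before any other CPU joins in.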

IMHO, queue size is orthogonal to what this patch is trying to
address, which is to fix a specific locking problem and improve
efficiency, e.g. by eliminating the cmpxchg() loop:
-	do {
-		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
-	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
+	atomic_cond_read_relaxed(&cmdq->lock, VAL > 0);
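
For context, the hunk above is from arm_smmu_cmdq_shared_lock(). Below
is a rough userspace model of the shared/exclusive counter scheme that
lock implements, using C11 atomics; this is my own toy reconstruction
for illustration (it models the old cmpxchg-based slow path, not the
patched one), not the driver code:

#include <limits.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int lock;	/* >0: shared holders, 0: free, <0: exclusive */

static void shared_lock(void)
{
	/*
	 * Fast path: if there is no exclusive holder, the increment alone
	 * takes the lock. If an exclusive holder set the counter to
	 * INT_MIN, the stale increment stays negative and is discarded on
	 * exclusive unlock.
	 */
	if (atomic_fetch_add_explicit(&lock, 1, memory_order_relaxed) >= 0)
		return;

	/* Slow path: wait for the exclusive holder, then race to increment. */
	for (;;) {
		int val = atomic_load_explicit(&lock, memory_order_acquire);

		if (val < 0)
			continue;
		if (atomic_compare_exchange_weak_explicit(&lock, &val, val + 1,
							  memory_order_relaxed,
							  memory_order_relaxed))
			return;
	}
}

static void shared_unlock(void)
{
	atomic_fetch_sub_explicit(&lock, 1, memory_order_release);
}

static int exclusive_trylock(void)
{
	int expected = 0;

	/* Only succeeds when there are no shared holders at all. */
	return atomic_compare_exchange_strong_explicit(&lock, &expected,
						       INT_MIN,
						       memory_order_acquire,
						       memory_order_relaxed);
}

static void exclusive_unlock(void)
{
	/* Resetting to 0 also discards stale fast-path increments. */
	atomic_store_explicit(&lock, 0, memory_order_release);
}

int main(void)
{
	shared_lock();
	printf("exclusive trylock while shared held: %d\n",
	       exclusive_trylock());
	shared_unlock();
	printf("exclusive trylock after release: %d\n", exclusive_trylock());
	exclusive_unlock();
	return 0;
}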

Even on bare metal with a restricted queue size, this patch reduces
the latency of concurrent madvise(MADV_DONTNEED) from multiple CPUs (I
tested 32 CPUs; latency was cut by 50% with each CPU unmapping a 1GB
buffer in 2MB chunks).
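
For reference, the userspace side of that stress looks roughly like
the sketch below. This is my own approximation of the workload, not
the actual benchmark; it also assumes the process is separately bound
to a device via SVA so the invalidations flow through the SMMU, which
is not shown here:

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define BUF_SIZE   (1UL << 30)	/* 1GB buffer per CPU/thread */
#define CHUNK_SIZE (2UL << 20)	/* dropped 2MB at a time */
#define NR_THREADS 32

static void *worker(void *arg)
{
	char *buf;

	(void)arg;
	buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return NULL;
	}

	/* Fault every page in so there is something to invalidate. */
	memset(buf, 0xab, BUF_SIZE);

	/* Drop the buffer 2MB at a time; each call triggers invalidations. */
	for (size_t off = 0; off < BUF_SIZE; off += CHUNK_SIZE)
		madvise(buf + off, CHUNK_SIZE, MADV_DONTNEED);

	munmap(buf, BUF_SIZE);
	return NULL;
}

int main(void)
{
	pthread_t threads[NR_THREADS];
	int i;

	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&threads[i], NULL, worker, NULL);
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(threads[i], NULL);
	return 0;
}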

