Message-ID: <20251017135145.GL3901471@nvidia.com>
Date: Fri, 17 Oct 2025 10:51:45 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Mostafa Saleh <smostafa@...gle.com>
Cc: Jacob Pan <jacob.pan@...ux.microsoft.com>, linux-kernel@...r.kernel.org,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
Will Deacon <will@...nel.org>, Robin Murphy <robin.murphy@....com>,
Nicolin Chen <nicolinc@...dia.com>,
Zhang Yu <zhangyu1@...ux.microsoft.com>,
Jean Philippe-Brucker <jean-philippe@...aro.org>,
Alexander Grest <Alexander.Grest@...rosoft.com>
Subject: Re: [PATCH 0/2] SMMU v3 CMDQ fix and improvement
On Fri, Oct 17, 2025 at 10:57:52AM +0000, Mostafa Saleh wrote:
> On Wed, Sep 24, 2025 at 10:54:36AM -0700, Jacob Pan wrote:
> > Hi Will et al,
> >
> > These two patches are derived from testing SMMU driver with smaller CMDQ
> > sizes where we see soft lockups.
> >
> > This happens on HyperV emulated SMMU v3 as well as baremetal ARM servers
> > with artificially reduced queue size and microbenchmark to stress test
> > concurrency.
>
> Is it possible to share what the artificial sizes are, and does the HW/emulation
> support range invalidation (IDR3.RIL)?
>
> I'd expect it would be really hard to overwhelm the command queue, unless the
> HW doesn't support range invalidation and/or the queue entries are close to
> the number of CPUs.
At least on Jacob's system there is no RIL and there are 72/144 CPU
cores potentially banging on this.
I think it is a combination of lots of required invalidation commands,
a low queue depth, and slow retirement of commands that makes it
easier to create a queue-full condition.
Without RIL one SVA invalidation may take out the entire small queue,
for example.
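To make that concrete, here is a rough back-of-the-envelope sketch
(not code from the driver; the page size, range size, and queue depth
are illustrative assumptions) of how many TLBI commands one SVA range
invalidation emits with and without RIL, and how that compares to a
small command queue:

```python
# Hedged sketch: estimate TLBI command counts for one SVA range
# invalidation. All constants below are illustrative assumptions,
# not values from any particular system.

PAGE_SIZE = 4096      # assumed 4K translation granule
QUEUE_DEPTH = 256     # assumed small CMDQ (2^8 entries)

def tlbi_commands(range_bytes, ril):
    """Approximate commands needed to invalidate range_bytes of VA."""
    if ril:
        # With range invalidation the whole range can be described
        # in very few commands; approximate as one for this sketch.
        return 1
    # Without RIL the driver must emit one per-page invalidation.
    return range_bytes // PAGE_SIZE

range_bytes = 4 << 20  # e.g. a 4MiB SVA invalidation

without_ril = tlbi_commands(range_bytes, ril=False)
with_ril = tlbi_commands(range_bytes, ril=True)

print(without_ril)                 # 1024 commands
print(with_ril)                    # 1
print(without_ril > QUEUE_DEPTH)   # True: several times the queue
```

Under these assumed numbers a single 4MiB invalidation without RIL
needs 4x the entire queue, so a handful of concurrent CPUs can easily
keep the queue full.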
Jason