Message-ID: <336c5a8a-ac2c-4e4c-b2f9-a0bc056aa498@arm.com>
Date: Tue, 21 Oct 2025 12:45:48 +0100
From: Robin Murphy <robin.murphy@....com>
To: Jacob Pan <jacob.pan@...ux.microsoft.com>
Cc: Mostafa Saleh <smostafa@...gle.com>, linux-kernel@...r.kernel.org,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
Will Deacon <will@...nel.org>, Nicolin Chen <nicolinc@...dia.com>,
Zhang Yu <zhangyu1@...ux.microsoft.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Alexander Grest <Alexander.Grest@...rosoft.com>,
Jason Gunthorpe <jgg@...dia.com>
Subject: Re: [PATCH 0/2] SMMU v3 CMDQ fix and improvement
On 2025-10-20 7:57 pm, Jacob Pan wrote:
> On Mon, 20 Oct 2025 09:02:40 -0300
> Jason Gunthorpe <jgg@...dia.com> wrote:
>
>> On Fri, Oct 17, 2025 at 09:50:31AM -0700, Jacob Pan wrote:
>>> On Fri, 17 Oct 2025 10:51:45 -0300
>>> Jason Gunthorpe <jgg@...dia.com> wrote:
>>>
>>>> On Fri, Oct 17, 2025 at 10:57:52AM +0000, Mostafa Saleh wrote:
>>>>> On Wed, Sep 24, 2025 at 10:54:36AM -0700, Jacob Pan wrote:
>>>>>> Hi Will et al,
>>>>>>
>>>>>> These two patches are derived from testing SMMU driver with
>>>>>> smaller CMDQ sizes where we see soft lockups.
>>>>>>
>>>>>> This happens on a Hyper-V emulated SMMU v3 as well as bare-metal
>>>>>> ARM servers with an artificially reduced queue size and a
>>>>>> microbenchmark to stress-test concurrency.
>>>>>
>>>>> Is it possible to share what the artificial sizes are, and whether
>>>>> the HW/emulation supports range invalidation (IDR3.RIL)?
>>>>>
>>>>> I'd expect it would be really hard to overwhelm the command
>>>>> queue, unless the HW doesn't support range invalidation and/or
>>>>> the number of queue entries is close to the number of CPUs.
>>>>
>>>> At least on Jacob's system there is no RIL and there are 72/144
>>>> CPU cores potentially banging on this.
>>>>
>>>> I think it is a combination of lots of required invalidation
>>>> commands, low queue depth and slow retirement of commands that
>>>> make it easier to create a queue full condition.
>>>>
>>>> Without RIL one SVA invalidation may take out the entire small
>>>> queue, for example.
>>> Right, no range invalidation and queue depth is 256 in this case.
>>
>> I think Robin is asking you to justify why the queue depth is 256 when
>> ARM is recommending much larger depths specifically to fix issues like
>> this?
> The smaller queue depth is chosen for CMD_SYNC latency reasons. But I
> don't know the implementation details of Hyper-V or the host SMMU driver.
TBH that sounds highly dubious. The only way I could imagine CMDQ size
bearing any relation to CMD_SYNC at all is if a hypervisor is emulating
a stage 1 vCMDQ in such a naive and lazy manner that a) performance is
already off the table, and b) it has a good chance of being broken anyway.
For the hardware to actually process, say, 1023 invalidations followed
by a sync takes as long as it takes, based on how busy the SMMU is. The
only difference in issuing that sequence of commands on a 256-entry
queue vs. a 1024-entry queue is that in the latter case, software does
not have to sit waiting for the first 768 to actually be consumed before
it can finish the submission and potentially get on with something else
until the sync completes. Yes, one could claim that technically the time
between *issuing* the CMD_SYNC and its completion is then lower, but
only because that extra time has now been wasted in a polling loop
waiting for CMDQ space instead - it's a meaningless distinction overall.
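To put it concretely, here is a rough sketch of the submission flow,
with hypothetical helper names rather than the actual driver code:

	/*
	 * Pseudocode: submit n commands plus a sync to a queue that
	 * the SMMU drains at its own rate, independent of queue depth.
	 */
	void submit_batch(struct queue *q, struct cmd *cmds, int n)
	{
		for (int i = 0; i < n; i++) {
			/* Only spins when n exceeds the free space... */
			while (queue_full(q))
				cpu_relax();
			queue_write(q, &cmds[i]);
		}
		queue_write_sync(q);
		/* ...and any time "saved" above is spent here instead. */
		while (!sync_complete(q))
			cpu_relax();
	}

Wall-clock time from the first command to sync completion is bounded by
the SMMU's consumption rate either way; the deeper queue merely lets the
submitter fall out of the for-loop sooner.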
> IMHO, queue size is orthogonal to what this patch is trying to
> address, which is to fix a specific locking problem and improve
> efficiency, e.g. by eliminating the cmpxchg:
> -	do {
> -		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
> -	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
> +	atomic_cond_read_relaxed(&cmdq->lock, VAL > 0);
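(For anyone following along without the code open: that is the slow
path of arm_smmu_cmdq_shared_lock(). From memory, modulo comments, the
current logic is roughly:

	static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
	{
		int val;

		/*
		 * Fast path: the exclusive locker parks the counter at
		 * INT_MIN, so an unconditional increment is harmless
		 * while it is held - the value stays negative.
		 */
		if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
			return;

		/* Slow path: wait for the exclusive holder, then retry. */
		do {
			val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
		} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
	}

so as I read the diff, the proposal is to have waiters rely on the
increment already made on the fast path rather than retrying a cmpxchg.)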
>
> Even on BM with restricted queue size, this patch reduces latency of
> concurrent madvise(MADV_DONTNEED) from multiple CPUs (I tested 32 CPUs,
> cutting unmap latency by 50% for a 1GB buffer in 2MB chunks per CPU).
My point is that a 50% improvement on nonsense is likely still nonsense.
With only 256 entries, every single one of those 2MB unmaps needs to
fill the entire CMDQ more than twice over. 32 CPUs all jostling to issue
about 34x as many commands as the queue can hold *each* is a ridiculous
level of contention. If a hypervisor is advertising an SMMU_IDR1.CMDQS
value that is absurdly tiny for the size of the VM then that represents
such an obvious bottleneck that it's hardly mainline Linux's
responsibility to maintain code to help "debug" it. As for "BM with
restricted queue size", like I said, just don't do that.
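(To spell out that arithmetic, assuming a 4KB granule and reading the
numbers as the 1GB split evenly across the 32 CPUs: each 2MB chunk needs
512 leaf TLBIs plus a CMD_SYNC, i.e. 513 commands against 256 slots, and
each CPU's 32MB share is 16 such chunks - upwards of 8000 commands per
CPU, all contending for the same queue.)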
What is the difference on an un-hacked bare-metal system with a
normally-sized queue? Is it even measurable? That's what's actually
interesting. Furthermore, what exactly does that measurement even mean?
If we're still issuing the same number of commands I struggle to believe
we could lose 50% of the *overall* time just bouncing a cacheline
between shared and exclusive state - is this actually just the *maximum*
per-CPU latency going down, at the cost of the minimum latency
correspondingly increasing just as much (if not comparatively more) due
to better fairness? And if so, how important is that really? I can
imagine there are equally cases where other callers might prefer a lower
minimum/mean latency at the price of some longer outliers.
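(As a toy illustration: if 32 CPUs each need one unit of queue
bandwidth, strict fairness has everyone finish at around t=32, whereas
an unfair scheme might let the luckiest finish anywhere from t=1 onwards
while the last still finishes around t=32 - the mean drops even though
the maximum barely moves.)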
Note I'm not saying I'm necessarily against making these changes, just
that I'm against making them without a believable justification that it
is actually beneficial to mainline users.
Thanks,
Robin.