Message-ID: <ab4d0c39-c6cb-4242-8e4c-479a684fb14d@arm.com>
Date: Fri, 17 Oct 2025 15:44:49 +0100
From: Robin Murphy <robin.murphy@....com>
To: Jason Gunthorpe <jgg@...dia.com>, Mostafa Saleh <smostafa@...gle.com>
Cc: Jacob Pan <jacob.pan@...ux.microsoft.com>, linux-kernel@...r.kernel.org,
 "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
 Will Deacon <will@...nel.org>, Nicolin Chen <nicolinc@...dia.com>,
 Zhang Yu <zhangyu1@...ux.microsoft.com>,
 Jean-Philippe Brucker <jean-philippe@...aro.org>,
 Alexander Grest <Alexander.Grest@...rosoft.com>
Subject: Re: [PATCH 0/2] SMMU v3 CMDQ fix and improvement

On 2025-10-17 2:51 pm, Jason Gunthorpe wrote:
> On Fri, Oct 17, 2025 at 10:57:52AM +0000, Mostafa Saleh wrote:
>> On Wed, Sep 24, 2025 at 10:54:36AM -0700, Jacob Pan wrote:
>>> Hi Will et al,
>>>
>>> These two patches are derived from testing SMMU driver with smaller CMDQ
>>> sizes where we see soft lockups.
>>>
>>> This happens on HyperV emulated SMMU v3 as well as baremetal ARM servers
>>> with artificially reduced queue size and microbenchmark to stress test
>>> concurrency.
>>
>> Is it possible to share what the artificial sizes are, and whether the
>> HW/emulation supports range invalidation (IDR3.RIL)?
>>
>> I'd expect it to be really hard to overwhelm the command queue, unless the
>> HW doesn't support range invalidation and/or the number of queue entries is
>> close to the number of CPUs.
> 
> At least on Jacob's system there is no RIL and there are 72/144 CPU
> cores potentially banging on this.
> 
> I think it is a combination of lots of required invalidation commands,
> low queue depth, and slow retirement of commands that makes it easier
> to create a queue-full condition.
> 
> Without RIL one SVA invalidation may take out the entire small queue,
> for example.

Indeed, once real hardware first started to arrive, we found that even 
just 4 NVMe queues doing ~8MB DMA unmaps with a modestly-clocked MMU-600 
were capable of keeping a 256-entry CMDQ full enough to occasionally hit 
this timeout with the original spinlock. That is precisely why, as well 
as introducing the new lock-free algorithm, we also stopped limiting the 
CMDQ to 4KB six years ago.
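
To put rough numbers on the no-RIL case (a minimal sketch, not driver
code; the ~8MB unmap size and 256-entry queue are the figures above,
while the 4KB granule is an assumption):

#include <stdio.h>

int main(void)
{
	/*
	 * Without RIL, each leaf page needs its own TLBI command, so one
	 * ~8MB unmap at a 4KB granule emits 2048 commands -- enough to
	 * fill a 256-entry CMDQ eight times over. Illustrative numbers.
	 */
	unsigned long unmap_bytes = 8UL << 20;	/* ~8MB DMA unmap */
	unsigned long granule = 4096;		/* assumed 4KB pages */
	unsigned long cmdq_entries = 256;	/* 256-entry CMDQ */
	unsigned long tlbi_cmds = unmap_bytes / granule;

	printf("TLBI commands per unmap: %lu\n", tlbi_cmds);
	printf("CMDQ fills per unmap:    %lu\n", tlbi_cmds / cmdq_entries);
	return 0;
}

With 4 NVMe queues issuing such unmaps concurrently, the producers can
easily outpace command retirement.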

Yes, if the queue is contrived to only be big enough to hold 3 or fewer 
commands per CPU, one can expect catastrophic levels of contention even 
with RIL. However, since that requires going out of one's way to hack the 
driver (and/or the hypervisor emulation) to force clearly unrealistic 
behaviour, I would say the best solution to that particular problem is 
"stop doing that".

If significant contention is visible in real-world workloads, that would 
be a more interesting concern.

Thanks,
Robin.
