Message-ID: <20250703175532.GF1209783@nvidia.com>
Date: Thu, 3 Jul 2025 14:55:32 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Pranjal Shrivastava <praan@...gle.com>
Cc: Nicolin Chen <nicolinc@...dia.com>, kevin.tian@...el.com,
corbet@....net, will@...nel.org, bagasdotme@...il.com,
robin.murphy@....com, joro@...tes.org, thierry.reding@...il.com,
vdumpa@...dia.com, jonathanh@...dia.com, shuah@...nel.org,
jsnitsel@...hat.com, nathan@...nel.org, peterz@...radead.org,
yi.l.liu@...el.com, mshavit@...gle.com, zhangzekun11@...wei.com,
iommu@...ts.linux.dev, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-tegra@...r.kernel.org, linux-kselftest@...r.kernel.org,
patches@...ts.linux.dev, mochs@...dia.com, alok.a.tiwari@...cle.com,
vasant.hegde@....com, dwmw2@...radead.org, baolu.lu@...ux.intel.com
Subject: Re: [PATCH v7 27/28] iommu/tegra241-cmdqv: Add user-space use support
On Thu, Jul 03, 2025 at 02:46:03PM +0000, Pranjal Shrivastava wrote:
> Right.. I was however hoping we'd also trap commands like CMD_PRI_RESP
> and CMD_RESUME...I'm not sure if they should be accelerated via CMDQV..
> I guess I'll need to look and understand a little more if they are..
Right now these commands are not supported by vSMMUv3 in Linux.
They probably should be trapped, but completing a PRI (or resuming a
stall, which we will treat the same way) will go through the PRI/page
fault logic in iommufd, not the cache invalidation path.
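To make the routing concrete, here is a rough userspace-only sketch of
what the trap-side dispatch could look like. Every name in it
(vsmmu_handle_trapped_cmd, the two path handlers, the opcode enum) is a
hypothetical stand-in, not the actual driver API, and the enum values
are purely symbolic rather than the SMMUv3 encodings:

#include <stdio.h>

enum vcmd_op {			/* symbolic only; not the spec encodings */
	OP_CFGI_CD,
	OP_TLBI_NH_VA,
	OP_ATC_INV,
	OP_PRI_RESP,
	OP_RESUME,
};

struct vcmd {
	enum vcmd_op op;
	unsigned long long arg;
};

/* hypothetical handlers standing in for the two kernel paths */
static void fault_path_complete(const struct vcmd *cmd)
{
	printf("op %d -> iommufd page-fault/PRI completion path\n", cmd->op);
}

static void invalidation_path(const struct vcmd *cmd)
{
	printf("op %d -> cache invalidation path\n", cmd->op);
}

/* Route one trapped guest command to the right backend. */
static void vsmmu_handle_trapped_cmd(const struct vcmd *cmd)
{
	switch (cmd->op) {
	case OP_PRI_RESP:
	case OP_RESUME:		/* stall resume treated like a PRI response */
		fault_path_complete(cmd);
		break;
	default:		/* CFGI/TLBI/ATC_INV etc. */
		invalidation_path(cmd);
		break;
	}
}

int main(void)
{
	struct vcmd pri  = { .op = OP_PRI_RESP };
	struct vcmd tlbi = { .op = OP_TLBI_NH_VA };

	vsmmu_handle_trapped_cmd(&pri);
	vsmmu_handle_trapped_cmd(&tlbi);
	return 0;
}

The only point of the sketch is that PRI_RESP/RESUME completions would
land in the fault machinery while everything else stays on the
invalidation side.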
> > The goal of the SMMU driver when it detects CMDQV support is to route
> > all supported invalidations to CMDQV queues and then balance those
> > queues across CPUs to reduce lock contention.
>
> I see.. that makes sense.. so it's a relatively small gain (but a nice
> one). Thanks for clarifying!
On bare metal the gain is small (just less lock contention from
balancing across queues), while under virtualization the gain is huge
(no trapping at all).
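As a toy illustration of the balancing idea (not the tegra241-cmdqv
code; the queue count and every name here are made up), each submitting
CPU just hashes itself onto one of the queues so that concurrent CPUs
rarely contend on the same lock:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NR_QUEUES 4		/* made-up number of HW command queues */

struct cmdq {
	pthread_mutex_t lock;	/* stands in for the per-queue spinlock */
	unsigned long submitted;
};

static struct cmdq queues[NR_QUEUES];

/* Hash the calling CPU onto a queue so concurrent CPUs mostly take
 * different locks. */
static struct cmdq *pick_queue(void)
{
	int cpu = sched_getcpu();

	if (cpu < 0)
		cpu = 0;
	return &queues[cpu % NR_QUEUES];
}

static void submit_cmd(void)
{
	struct cmdq *q = pick_queue();

	pthread_mutex_lock(&q->lock);
	q->submitted++;		/* stand-in for writing the command words */
	pthread_mutex_unlock(&q->lock);
}

int main(void)
{
	for (int i = 0; i < NR_QUEUES; i++)
		pthread_mutex_init(&queues[i].lock, NULL);
	for (int i = 0; i < 64; i++)
		submit_cmd();
	for (int i = 0; i < NR_QUEUES; i++)
		printf("queue %d: %lu commands\n", i, queues[i].submitted);
	return 0;
}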
Regardless, the SMMU driver uses CMDQV support if the HW says it is
there.
Jason