Message-ID: <20250704125012.GK1209783@nvidia.com>
Date: Fri, 4 Jul 2025 09:50:12 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: Pranjal Shrivastava <praan@...gle.com>
Cc: Nicolin Chen <nicolinc@...dia.com>, kevin.tian@...el.com,
corbet@....net, will@...nel.org, bagasdotme@...il.com,
robin.murphy@....com, joro@...tes.org, thierry.reding@...il.com,
vdumpa@...dia.com, jonathanh@...dia.com, shuah@...nel.org,
jsnitsel@...hat.com, nathan@...nel.org, peterz@...radead.org,
yi.l.liu@...el.com, mshavit@...gle.com, zhangzekun11@...wei.com,
iommu@...ts.linux.dev, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-tegra@...r.kernel.org, linux-kselftest@...r.kernel.org,
patches@...ts.linux.dev, mochs@...dia.com, alok.a.tiwari@...cle.com,
vasant.hegde@....com, dwmw2@...radead.org, baolu.lu@...ux.intel.com
Subject: Re: [PATCH v7 27/28] iommu/tegra241-cmdqv: Add user-space use support
On Thu, Jul 03, 2025 at 06:48:42PM +0000, Pranjal Shrivastava wrote:
> Ahh, thanks for this, that saved a lot of my time! And yes, I see some
> functions in eventq.c calling iopf_group_response(), which settles the
> CMD_RESUME. So, I assume these resume commands would be trapped and
> *actually* executed through this or a similar path for vPRI.
Yes, that is what Intel did. PRI has to be tracked in the kernel
because we have to ack the requests eventually. If the VMM crashes,
the kernel has to ack everything and try to clean up.
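Roughly, the lifecycle is: the kernel queues the fault group, the VMM
answers (for SMMUv3 that would be a trapped CMD_RESUME), and the kernel
settles it with iopf_group_response(); on teardown, anything still
outstanding gets an INVALID response. A minimal sketch of that shape,
not the actual iommufd code (vpri_ctx, vpri_pending and the function
names below are made up for illustration):

#include <linux/iommu.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/slab.h>

/* Hypothetical per-VMM bookkeeping for outstanding fault groups. */
struct vpri_pending {
	struct list_head node;
	struct iopf_group *group;
};

struct vpri_ctx {
	struct mutex lock;
	struct list_head pending;
};

/* VMM answered (e.g. a trapped page response / CMD_RESUME). */
static void vpri_complete(struct vpri_ctx *ctx, struct vpri_pending *p,
			  enum iommu_page_response_code code)
{
	mutex_lock(&ctx->lock);
	list_del(&p->node);
	mutex_unlock(&ctx->lock);

	iopf_group_response(p->group, code);	/* SUCCESS or INVALID */
	iopf_free_group(p->group);
	kfree(p);
}

/* VMM went away: ack everything so the device isn't left waiting. */
static void vpri_abort_all(struct vpri_ctx *ctx)
{
	struct vpri_pending *p, *next;

	mutex_lock(&ctx->lock);
	list_for_each_entry_safe(p, next, &ctx->pending, node) {
		list_del(&p->node);
		iopf_group_response(p->group, IOMMU_PAGE_RESP_INVALID);
		iopf_free_group(p->group);
		kfree(p);
	}
	mutex_unlock(&ctx->lock);
}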
Also, SMMUv3 does not support PRI today, just stall.
Jason