Message-ID: <c0af7aa9-a991-4d20-a2bd-c6065f04fc3d@amd.com>
Date: Mon, 19 May 2025 22:35:11 +0530
From: Vasant Hegde <vasant.hegde@....com>
To: "Tian, Kevin" <kevin.tian@...el.com>, Nicolin Chen <nicolinc@...dia.com>
Cc: "jgg@...dia.com" <jgg@...dia.com>, "corbet@....net" <corbet@....net>,
"will@...nel.org" <will@...nel.org>,
"bagasdotme@...il.com" <bagasdotme@...il.com>,
"robin.murphy@....com" <robin.murphy@....com>,
"joro@...tes.org" <joro@...tes.org>,
"thierry.reding@...il.com" <thierry.reding@...il.com>,
"vdumpa@...dia.com" <vdumpa@...dia.com>,
"jonathanh@...dia.com" <jonathanh@...dia.com>,
"shuah@...nel.org" <shuah@...nel.org>,
"jsnitsel@...hat.com" <jsnitsel@...hat.com>,
"nathan@...nel.org" <nathan@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>, "Liu, Yi L"
<yi.l.liu@...el.com>, "mshavit@...gle.com" <mshavit@...gle.com>,
"praan@...gle.com" <praan@...gle.com>,
"zhangzekun11@...wei.com" <zhangzekun11@...wei.com>,
"iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
"linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
"patches@...ts.linux.dev" <patches@...ts.linux.dev>,
"mochs@...dia.com" <mochs@...dia.com>,
"alok.a.tiwari@...cle.com" <alok.a.tiwari@...cle.com>
Subject: Re: [PATCH v4 10/23] iommufd/viommu: Introduce IOMMUFD_OBJ_HW_QUEUE
and its related struct
Kevin, Nicolin,
On 5/16/2025 8:29 AM, Tian, Kevin wrote:
>> From: Nicolin Chen <nicolinc@...dia.com>
>> Sent: Friday, May 16, 2025 10:30 AM
>>
>> On Thu, May 15, 2025 at 05:58:41AM +0000, Tian, Kevin wrote:
>>>> From: Nicolin Chen <nicolinc@...dia.com>
>>>> Sent: Friday, May 9, 2025 11:03 AM
>>>>
>>>> Add IOMMUFD_OBJ_HW_QUEUE with an iommufd_hw_queue structure,
>>>> representing a HW-accelerated type of IOMMU physical queue that can
>>>> be passed through to a user space VM for direct hardware control,
>>>> such as:
>>>> - NVIDIA's Virtual Command Queue
>>>> - AMD vIOMMU's Command Buffer, Event Log Buffer, and PPR Log Buffer
>>>>
>>>> Introduce an allocator iommufd_hw_queue_alloc(), and add a pair of
>>>> viommu ops for iommufd to forward user space ioctls to IOMMU drivers.
>>>>
>>>> Given that the first user of this HW QUEUE (tegra241-cmdqv) will need
>>>> to ensure the queue memory is physically contiguous, add a flag
>>>> property in iommufd_viommu_ops and
>>>> IOMMUFD_VIOMMU_FLAG_HW_QUEUE_READS_PA to allow a driver to flag it so
>>>> that the core will validate the physical pages of a given guest
>>>> queue.
>>>
>>> 'READS' is confusing here. What about xxx_CONTIG_PAS?
>>
>> Combining Jason's first comments here:
>> https://lore.kernel.org/linux-iommu/20250515160620.GJ382960@...dia.com/
>>
>> So, pinning should be optional too. And I think it would be unlikely
>> for HW to need contiguous physical pages while not also requiring the
>> pages to be pinned, right?
The AMD IOMMU needs contiguous GPA space for a buffer (like the command
buffer), not contiguous physical addresses.
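
To make the distinction concrete: for tegra241-cmdqv the core would have
to verify that the queue's backing pages are host-physically contiguous,
whereas for AMD only the guest-physical (stage-2 / IOVA) layout matters.
A rough, hypothetical sketch of such a host-PA check -- not code from
this series, the helper name and calling convention are made up purely
for illustration -- could look like:

static int hw_queue_check_contig_pa(struct page **pages,
				    unsigned long npages)
{
	/* Expect each page's host PA to follow the previous one */
	phys_addr_t expect = page_to_phys(pages[0]) + PAGE_SIZE;
	unsigned long i;

	for (i = 1; i < npages; i++) {
		if (page_to_phys(pages[i]) != expect)
			return -EINVAL;
		expect += PAGE_SIZE;
	}
	return 0;
}

A flag like IOMMUFD_VIOMMU_FLAG_HW_QUEUE_READS_PA would gate that extra
check; the AMD case would skip it and only rely on the queue being
contiguous in GPA space.
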
>>
>> So, we need a flag that could indicate doing both tests. Yet,
>> "xxx_CONTIG_PAS" doesn't sound very fitting, compared to this
>> "IOMMUFD_VIOMMU_FLAG_HW_QUEUE_READS_PA".
>>
>> Perhaps, we should just add some comments to clarify a bit. Or
>> do you have some better naming?
>>
>
> let's wait until that open question is closed, i.e. whether we still
> let the core manage it and whether AMD requires pinning even when
> IOVA is used.
I think we may still want to pin those buffers.
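
To be clear on what pinning would involve here (illustration only, not
code from this series; the wrapper name and parameters are made up), the
core would long-term pin the guest queue's backing pages along the lines
of:

static int hw_queue_pin_guest_pages(unsigned long uaddr, int npages,
				    struct page **pages)
{
	/*
	 * FOLL_LONGTERM because the IOMMU hardware keeps reading the
	 * queue for as long as the vIOMMU object exists.
	 */
	return pin_user_pages_fast(uaddr, npages,
				   FOLL_WRITE | FOLL_LONGTERM, pages);
}

so the pages cannot be migrated or swapped out underneath the hardware
while the queue is live.
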
-Vasant