Message-ID: <aCa5dLKso2WXliut@Asurada-Nvidia>
Date: Thu, 15 May 2025 21:05:08 -0700
From: Nicolin Chen <nicolinc@...dia.com>
To: "jgg@...dia.com" <jgg@...dia.com>, "Tian, Kevin" <kevin.tian@...el.com>
CC: "corbet@....net" <corbet@....net>, "will@...nel.org" <will@...nel.org>,
"bagasdotme@...il.com" <bagasdotme@...il.com>, "robin.murphy@....com"
<robin.murphy@....com>, "joro@...tes.org" <joro@...tes.org>,
"thierry.reding@...il.com" <thierry.reding@...il.com>, "vdumpa@...dia.com"
<vdumpa@...dia.com>, "jonathanh@...dia.com" <jonathanh@...dia.com>,
"shuah@...nel.org" <shuah@...nel.org>, "jsnitsel@...hat.com"
<jsnitsel@...hat.com>, "nathan@...nel.org" <nathan@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>, "Liu, Yi L"
<yi.l.liu@...el.com>, "mshavit@...gle.com" <mshavit@...gle.com>,
"praan@...gle.com" <praan@...gle.com>, "zhangzekun11@...wei.com"
<zhangzekun11@...wei.com>, "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>, "linux-tegra@...r.kernel.org"
<linux-tegra@...r.kernel.org>, "linux-kselftest@...r.kernel.org"
<linux-kselftest@...r.kernel.org>, "patches@...ts.linux.dev"
<patches@...ts.linux.dev>, "mochs@...dia.com" <mochs@...dia.com>,
"alok.a.tiwari@...cle.com" <alok.a.tiwari@...cle.com>, "vasant.hegde@....com"
<vasant.hegde@....com>
Subject: Re: [PATCH v4 11/23] iommufd/viommu: Add IOMMUFD_CMD_HW_QUEUE_ALLOC
ioctl
On Fri, May 16, 2025 at 03:52:16AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@...dia.com>
> > Sent: Friday, May 16, 2025 11:17 AM
> >
> > On Fri, May 16, 2025 at 02:49:44AM +0000, Tian, Kevin wrote:
> > > > From: Nicolin Chen <nicolinc@...dia.com>
> > > > Sent: Friday, May 16, 2025 2:45 AM
> > > >
> > > > On Thu, May 15, 2025 at 06:30:27AM +0000, Tian, Kevin wrote:
> > > > > > From: Nicolin Chen <nicolinc@...dia.com>
> > > > > > Sent: Friday, May 9, 2025 11:03 AM
> > > > > >
> > > > > > +
> > > > > > +/**
> > > > > > + * struct iommu_hw_queue_alloc - ioctl(IOMMU_HW_QUEUE_ALLOC)
> > > > > > + * @size: sizeof(struct iommu_hw_queue_alloc)
> > > > > > + * @flags: Must be 0
> > > > > > + * @viommu_id: Virtual IOMMU ID to associate the HW queue with
> > > > > > + * @type: One of enum iommu_hw_queue_type
> > > > > > + * @index: The logical index to the HW queue per virtual IOMMU
> > > > > > + *         for a multi-queue model
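
For context, the kernel-doc above describes a uAPI struct roughly along
these lines (a sketch from the documented fields alone; the remaining
fields of the hunk are snipped):

	struct iommu_hw_queue_alloc {
		__u32 size;		/* sizeof(struct iommu_hw_queue_alloc) */
		__u32 flags;		/* must be 0 */
		__u32 viommu_id;	/* virtual IOMMU to associate with */
		__u32 type;		/* one of enum iommu_hw_queue_type */
		__u32 index;		/* logical per-vIOMMU queue index */
		/* ...remaining fields snipped... */
	};
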
> > > > >
> > > > > I'm thinking of an alternative that avoids having the user assign the
> > > > > index and avoids letting the driver poke at object dependencies (next
> > > > > patch).
> > > > >
> > > > > Let's say the index is internally assigned by the driver, so this cmd
> > > > > just allocates a hw queue and it's up to the driver to decide the
> > > > > allocation policy, e.g. in ascending order.
> > > > >
> > > > > Introduce a new flag in viommu_ops to indicate to the core that the
> > > > > new hw queue should hold a reference to the previous hw queue.
> > > > >
> > > > > The core maintains a last_queue field in viommu. Upon successful return
> > > > > from @hw_queue_alloc() the core increments the users refcnt of
> > > > > last_queue, records the dependency in the iommufd_hw_queue struct,
> > > > > and updates viommu->last_queue.
> > > > >
> > > > > Then the destroy order is naturally guaranteed.
> > > >
> > > > I have thought about that too. It's nice that the core can easily
> > > > maintain the dependency for the driver.
> > > >
> > > > But there would still need to be an out_index to mark each dynamically
> > > > allocated queue, so the VMM would know where it should map the queue.
> > > >
> > > > For example, if the VMM wants to allocate a queue at its own index=1
> > > > without allocating index=0 first, the kernel cannot fail that, as the
> > > > VMM doesn't provide the index. The only way left for the kernel would
> > > > be to output the allocated queue with index=0 and then hope the VMM
> > > > validates it, which doesn't sound safe..
> > > >
> > >
> > > The VMM's index is virtual, so it could be mapped to whatever queue
> > > object the VMM creates, at its own discretion.
> > >
> > > The uAPI just requires the VMM to remember a sequential list of allocated
> > > queue objects and destroy them in reverse order of allocation, instead
> > > of in the reverse order of virtual indexes.
> >
> > But that's not going to work for VCMDQ.
> >
> > A VINTF exposes only a single mmap'd page that controls multiple
> > queues, and all queues have to be mapped correctly between HW and VM
> > indexes. Otherwise, it won't work if the VMM maps, e.g.:
> >
> > HW-level VINTF1 LVCMDQ0 <==> VM-level VINTF0 LVCMDQ1
> > HW-level VINTF1 LVCMDQ1 <==> VM-level VINTF0 LVCMDQ0
> >
> > So, one way or another, the kernel has to ensure a static mapping
> > of the indexes. And I think it's safer if the VMM tells the kernel
> > which index to allocate..
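
To illustrate that last point: with a caller-provided index the driver
can reject a mismatch up front, e.g. with a check along these lines
(hypothetical names, not from this series):

	/* The user-picked logical index must name a free LVCMDQ slot in
	 * the VINTF, which keeps the VM<->HW index mapping static.
	 */
	if (index >= vintf->num_lvcmdqs || vintf->lvcmdqs[index])
		return ERR_PTR(-EINVAL);
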
> >
>
> Okay, that's a valid point.
>
> But hey, we are already adding various restrictions to the uAPI
> about dependency, contiguity, etc. to which the VMM should conform.
> What would it hurt if we further said that the VMM should allocate
> virtual indexes in ascending order along with hw queue allocation?
You mean adding another flag to manage the dependency in the core,
right?
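
Something like this in the core, I suppose (a rough sketch; all the
names here are invented for illustration):

	/*
	 * Hypothetical core-side chaining, run after a successful
	 * viommu->ops->hw_queue_alloc(): assign indexes in ascending
	 * order and pin the previous queue, so that queues can only be
	 * destroyed in the reverse order of allocation.
	 */
	static void iommufd_hw_queue_chain(struct iommufd_viommu *viommu,
					   struct iommufd_hw_queue *hw_queue)
	{
		hw_queue->index = viommu->next_queue_index++;
		if (viommu->last_queue) {
			/* Held until this queue is destroyed */
			refcount_inc(&viommu->last_queue->obj.users);
			hw_queue->depend = viommu->last_queue;
		}
		viommu->last_queue = hw_queue;
	}
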
I talked with Jason offline when adding that depend API. He didn't
want it in the core, saying that it's a driver thing.

But that was before we added pin and contiguity, which he doesn't
really enjoy having in the core either.
So, yea, I think you have a point here..

@Jason?

Thanks
Nicolin