[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20240712130037.GA14050@ziepe.ca>
Date: Fri, 12 Jul 2024 10:00:37 -0300
From: Jason Gunthorpe <jgg@...pe.ca>
To: Nicolin Chen <nicolinc@...dia.com>
Cc: Lu Baolu <baolu.lu@...ux.intel.com>, Kevin Tian <kevin.tian@...el.com>,
Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Robin Murphy <robin.murphy@....com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Yi Liu <yi.l.liu@...el.com>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
Joel Granados <j.granados@...sung.com>, iommu@...ts.linux.dev,
virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v8 06/10] iommufd: Add iommufd fault object
On Tue, Jul 09, 2024 at 10:33:42AM -0700, Nicolin Chen wrote:
> > We are potentially talking about 5-10 physical smmus and 2-3 FDs per
> > physical? Does that scare anyone?
>
> I think we can share the same FD by adding a viommu_id somewhere
> to indicate what the data/event belongs to. Yet, it seemed that
> you had a counter-argument that a shared FD (queue) might have a
> security concern as well?
> https://lore.kernel.org/linux-iommu/20240522232833.GH20229@nvidia.com/
That was for the physical HW queue not so much the FD.
We need to be mindful that these events can't DOS the hypervisor, that
constrains how we track pending events in the kernel, not how they get
marshaled to FDs to deliver to user space.
Thinking more, it makes sense that a FD would tie 1:1 with a queue in
the VM.
That way backpressure on a queue will not cause head of line blocking
to other queues because they multiplex onto a single FD.
Jason
Powered by blists - more mailing lists