Message-ID: <BN9PR11MB527600A5B8DC271075936A918CE12@BN9PR11MB5276.namprd11.prod.outlook.com>
Date: Wed, 22 Jan 2025 09:33:35 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: Nicolin Chen <nicolinc@...dia.com>, Jason Gunthorpe <jgg@...dia.com>
CC: "corbet@....net" <corbet@....net>, "will@...nel.org" <will@...nel.org>,
"joro@...tes.org" <joro@...tes.org>, "suravee.suthikulpanit@....com"
<suravee.suthikulpanit@....com>, "robin.murphy@....com"
<robin.murphy@....com>, "dwmw2@...radead.org" <dwmw2@...radead.org>,
"baolu.lu@...ux.intel.com" <baolu.lu@...ux.intel.com>, "shuah@...nel.org"
<shuah@...nel.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "iommu@...ts.linux.dev"
<iommu@...ts.linux.dev>, "linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>, "linux-kselftest@...r.kernel.org"
<linux-kselftest@...r.kernel.org>, "linux-doc@...r.kernel.org"
<linux-doc@...r.kernel.org>, "eric.auger@...hat.com" <eric.auger@...hat.com>,
"jean-philippe@...aro.org" <jean-philippe@...aro.org>, "mdf@...nel.org"
<mdf@...nel.org>, "mshavit@...gle.com" <mshavit@...gle.com>,
"shameerali.kolothum.thodi@...wei.com"
<shameerali.kolothum.thodi@...wei.com>, "smostafa@...gle.com"
<smostafa@...gle.com>, "ddutile@...hat.com" <ddutile@...hat.com>, "Liu, Yi L"
<yi.l.liu@...el.com>, "patches@...ts.linux.dev" <patches@...ts.linux.dev>
Subject: RE: [PATCH v5 08/14] iommufd/viommu: Add iommufd_viommu_report_event
helper
> From: Nicolin Chen <nicolinc@...dia.com>
> Sent: Wednesday, January 22, 2025 3:16 PM
>
> On Tue, Jan 21, 2025 at 08:21:28PM -0400, Jason Gunthorpe wrote:
> > On Tue, Jan 21, 2025 at 01:40:05PM -0800, Nicolin Chen wrote:
> > > > There is also the minor detail of what happens if the hypervisor HW
> > > > queue overflows - I don't know the answer here. It is security
> > > > concerning since the VM can spam DMA errors at high rate. :|
> > >
> > > In my view, the hypervisor queue is the vHW queue for the VM, so
> > > it should act like a HW, which means it's up to the guest kernel
> > > driver that handles the high rate DMA errors..
> >
> > I'm mainly wondering what happens if the single physical kernel
> > event queue overflows because it is DOS'd by a VM and the hypervisor
> > cannot drain it fast enough?
> >
> > I haven't looked closely but is there some kind of rate limiting or
> > otherwise to mitigate DOS attacks on the shared event queue from VMs?
>
> SMMUv3 reads the event out of the physical kernel event queue,
> and adds that to faultq or veventq or prints it out. So, it'd
> not overflow because of DOS? And all other drivers should do
> the same?
>
"add that to faultq or eventq" could take time or the irqthread
could be preempted for various reasons then there is always an
window within which an overflow condition could occur due to
the smmu driver incapable of fetching pending events timely.
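Roughly, the shape of the window is as below (just a sketch to
illustrate the race; the names are made up and this is not the
actual arm-smmu-v3 code):

    /*
     * Illustrative sketch only. HW advances 'prod' as new faults
     * arrive; SW advances 'cons' as it drains them.
     */
    struct evtq {
            u32 prod;   /* producer index, written by HW */
            u32 cons;   /* consumer index, written by SW */
            u32 size;
    };

    static void evtq_irq_thread(struct evtq *q)
    {
            while (q->cons != q->prod) {
                    /* forwarding into faultq/veventq may take time */
                    report_one_event(q, q->cons++);  /* hypothetical helper */
                    /*
                     * The thread may also be preempted here. HW keeps
                     * advancing q->prod the whole time; once it wraps
                     * onto q->cons the queue overflows and new events
                     * are dropped.
                     */
            }
    }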
On VT-d the driver can disable reporting of non-recoverable faults
for a given device via a control bit in the PASID entry, but I don't
see a similar knob for PRQ.
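(The control bit I mean is FPD, the Fault Processing Disable bit,
which IIRC sits at bit 1 of the first quadword of a scalable-mode
PASID entry. Roughly, with a made-up helper name and a u64-array
view of the entry:)

    #define VTD_PASID_FPD   (1ULL << 1)  /* Fault Processing Disable */

    /* Illustrative only: suppress non-recoverable fault reporting
     * for one device/PASID by setting FPD in its PASID entry.    */
    static void pasid_disable_faults(u64 *pasid_entry)
    {
            pasid_entry[0] |= VTD_PASID_FPD;
            /* caller must flush the PASID cache afterwards */
    }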
And the risk of overflow in the intel-iommu driver is higher than
on arm: the irqthread reads head/tail once, batch-reports the
events in between, and only then updates the head register...
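i.e. something of this shape (illustrative, not the actual code;
register names and helpers are made up):

    static void prq_irq_thread(void __iomem *regs)
    {
            u32 head = readl(regs + PRQ_HEAD);  /* read once */
            u32 tail = readl(regs + PRQ_TAIL);

            while (head != tail) {
                    report_event(entry_at(head));       /* may take time */
                    head = (head + 1) % PRQ_QUEUE_SIZE;
            }

            /*
             * The head register is written back only after the whole
             * batch is reported, so from the HW's view the queue
             * stays full for the entire loop -- unlike a driver
             * that advances head after each event.
             */
            writel(head, regs + PRQ_HEAD);
    }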