[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20231201203536.GG1489931@ziepe.ca>
Date: Fri, 1 Dec 2023 16:35:36 -0400
From: Jason Gunthorpe <jgg@...pe.ca>
To: Lu Baolu <baolu.lu@...ux.intel.com>
Cc: Joerg Roedel <joro@...tes.org>, Will Deacon <will@...nel.org>,
Robin Murphy <robin.murphy@....com>,
Kevin Tian <kevin.tian@...el.com>,
Jean-Philippe Brucker <jean-philippe@...aro.org>,
Nicolin Chen <nicolinc@...dia.com>,
Yi Liu <yi.l.liu@...el.com>,
Jacob Pan <jacob.jun.pan@...ux.intel.com>,
Yan Zhao <yan.y.zhao@...el.com>, iommu@...ts.linux.dev,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v7 12/12] iommu: Improve iopf_queue_flush_dev()
On Wed, Nov 15, 2023 at 11:02:26AM +0800, Lu Baolu wrote:
> The iopf_queue_flush_dev() is called by the iommu driver before releasing
> a PASID. It ensures that all pending faults for this PASID have been
> handled or cancelled, and won't hit the address space that reuses this
> PASID. The driver must make sure that no new fault is added to the queue.
This needs more explanation, why should anyone care?
More importantly, why is *discarding* the right thing to do?
Especially why would we discard a partial page request group?
After we change a translation we may have PRI requests in a
queue. They need to be acknowledged, not discarded. The DMA in the
device should be restarted and the device should observe the new
translation - if it is blocking then it should take a DMA error.
More broadly, we should just let things run their normal course. The
domain to deliver the fault to should be determined very early. If we
get a fault and there is no fault domain currently assigned then just
restart it.
The main reason to fence would be to allow the domain to become freed
as the faults should be holding pointers to it. But I feel there are
simpler options for that then this..
> The SMMUv3 driver doesn't use it because it only implements the
> Arm-specific stall fault model where DMA transactions are held in the SMMU
> while waiting for the OS to handle iopf's. Since a device driver must
> complete all DMA transactions before detaching domain, there are no
> pending iopf's with the stall model. PRI support requires adding a call to
> iopf_queue_flush_dev() after flushing the hardware page fault queue.
This explanation doesn't make much sense, from a device driver
perspective both PRI and stall cause the device to not complete DMAs.
The difference between stall and PRI is fairly small, stall causes an
internal bus to lock up while PRI does not.
> -int iopf_queue_flush_dev(struct device *dev)
> +int iopf_queue_discard_dev_pasid(struct device *dev, ioasid_t pasid)
> {
> struct iommu_fault_param *iopf_param = iopf_get_dev_fault_param(dev);
> + const struct iommu_ops *ops = dev_iommu_ops(dev);
> + struct iommu_page_response resp;
> + struct iopf_fault *iopf, *next;
> + int ret = 0;
>
> if (!iopf_param)
> return -ENODEV;
>
> flush_workqueue(iopf_param->queue->wq);
> +
A naked flush_workqueue like this is really suspicious, it needs a
comment explaining why the queue can't get more work queued at this
point.
I suppose the driver is expected to stop calling
iommu_report_device_fault() before calling this function, but that
doesn't seem like it is going to be possible. Drivers should be
implementing atomic replace for the PASID updates and in that case
there is no momement when it can say the HW will stop generating PRI.
I'm looking at this code after these patches are applied and it still
seems quite bonkers to me :(
Why do we allocate two copies of the memory on all fault paths?
Why do we have fault->type still that only has one value?
What is serializing iommu_get_domain_for_dev_pasid() in the fault
path? It looks sort of like the plan is to use iopf_param->lock and
ensure domain removal grabs that lock at least after the xarray is
changed - but does that actually happen?
I would suggest, broadly, a flow for iommu_report_device_fault() sort
of:
1) Allocate memory for the evt. Every path except errors needs this,
so just do it
2) iopf_get_dev_fault_param() should not have locks in it! This is
fast path now. Use a refcount, atomic compare exchange to allocate,
and RCU free.
3) Everything runs under the fault_param->lock
4) Check if !IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE, set it aside and then
exit! This logic is really tortured and confusing
5) Allocate memory and assemble the group
6) Obtain the domain for this group and incr a per-domain counter that a
fault is pending on that domain
7) Put the *group* into the WQ. Put the *group* on a list in fault_param
instead of the individual faults
8) Don't linear search a linked list in iommu_page_response()! Pass
the group in that we got from the WQ that we *know* is still
active. Ack that passed group.
When freeing a domain wait for the per-domain counter to go to
zero. This ensures that the WQ is flushed out and all the outside
domain references are gone.
When wanting to turn off PRI make sure a non-PRI domain is
attached to everything. Fence against the HW's event queue. No new
iommu_report_device_fault() is possible.
Lock the fault_param->lock and go through every pending group and
respond it. Mark the group memory as invalid so iommu_page_response()
NOP's it. Unlock, fence the HW against queued responses, and turn off
PRI.
An *optimization* would be to lightly flush the domain when changing
the translation. Lock the fault_param->lock and look for groups in the
list with old_domain. Do the same as for PRI-off: respond to the
group, mark it as NOP. The WQ may still be chewing on something so the
domain free still has to check and wait.
Did I get it right??
Jason
Powered by blists - more mailing lists