Message-ID: <20190819231925.GW253360@google.com>
Date: Mon, 19 Aug 2019 18:19:25 -0500
From: Bjorn Helgaas <helgaas@...nel.org>
To: Kuppuswamy Sathyanarayanan
<sathyanarayanan.kuppuswamy@...ux.intel.com>
Cc: linux-pci@...r.kernel.org, linux-kernel@...r.kernel.org,
ashok.raj@...el.com, keith.busch@...el.com,
Joerg Roedel <joro@...tes.org>,
David Woodhouse <dwmw2@...radead.org>,
iommu@...ts.linux-foundation.org
Subject: Re: [PATCH v5 4/7] PCI/ATS: Add PRI support for PCIe VF devices
On Mon, Aug 19, 2019 at 03:53:31PM -0700, Kuppuswamy Sathyanarayanan wrote:
> On Mon, Aug 19, 2019 at 09:15:00AM -0500, Bjorn Helgaas wrote:
> > On Thu, Aug 15, 2019 at 03:39:03PM -0700, Kuppuswamy Sathyanarayanan wrote:
> > > On 8/15/19 3:20 PM, Bjorn Helgaas wrote:
> > > > [+cc Joerg, David, iommu list: because IOMMU drivers are the only
> > > > callers of pci_enable_pri() and pci_enable_pasid()]
> > > >
> > > > On Thu, Aug 01, 2019 at 05:06:01PM -0700, sathyanarayanan.kuppuswamy@...ux.intel.com wrote:
> > > > > From: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@...ux.intel.com>
> > > > >
> > > > > When the IOMMU tries to enable the Page Request Interface (PRI) for a
> > > > > VF device in iommu_enable_dev_iotlb(), it always fails because PRI
> > > > > support for PCIe VF devices is currently broken. The current
> > > > > implementation expects the given PCIe device (PF or VF) to implement
> > > > > the PRI capability before enabling PRI. But this assumption is
> > > > > incorrect. Per PCIe spec r4.0, sec 9.3.7.11, VFs associated with a PF
> > > > > can only use the PRI of the PF and may not implement it themselves.
> > > > > Hence we need to make an exception to handle PRI support for PCIe VF
> > > > > devices.
> > > > >
> > > > > Also, since PRI is a shared resource between the PF and its VFs, the
> > > > > following rules should apply:
> > > > >
> > > > > 1. Use proper locking before accessing/modifying PF resources in the
> > > > > VF PRI enable/disable path.
> > > > > 2. Use reference counting to track the usage of the PRI resource.
> > > > > 3. Disable PRI only if the PRI reference count (pri_ref_cnt) is zero.
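For illustration, a minimal sketch of the refcounting scheme described
above (not the actual patch): pri_ref_cnt and pri_lock are assumed
fields added to the PF's struct pci_dev here, and pci_physfn() is the
existing helper that returns a VF's PF (or the device itself for a PF).

  #include <linux/atomic.h>
  #include <linux/mutex.h>
  #include <linux/pci.h>

  static int sketch_vf_enable_pri(struct pci_dev *pdev, u32 reqs)
  {
          struct pci_dev *pf = pci_physfn(pdev);  /* VFs use the PF's PRI */
          int ret = 0;

          mutex_lock(&pf->pri_lock);              /* assumed lock on the PF */
          if (atomic_read(&pf->pri_ref_cnt) == 0) /* assumed refcount field */
                  ret = pci_enable_pri(pf, reqs); /* enable once, on the PF */
          if (!ret)
                  atomic_inc(&pf->pri_ref_cnt);   /* count each enabled user */
          mutex_unlock(&pf->pri_lock);

          return ret;
  }

  static void sketch_vf_disable_pri(struct pci_dev *pdev)
  {
          struct pci_dev *pf = pci_physfn(pdev);

          mutex_lock(&pf->pri_lock);
          if (atomic_dec_and_test(&pf->pri_ref_cnt))
                  pci_disable_pri(pf);            /* last user turns PRI off */
          mutex_unlock(&pf->pri_lock);
  }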
> >
> > > > Wait, why do we need this at all? I agree the spec says VFs may not
> > > > implement PRI or PASID capabilities and that VFs use the PRI and
> > > > PASID of the PF.
> > > >
> > > > But why do we need to support pci_enable_pri() and pci_enable_pasid()
> > > > for VFs? There's nothing interesting we can *do* in the VF, and
> > > > passing it off to the PF adds all this locking mess. For VFs, can we
> > > > just make them do nothing or return -EINVAL? What functionality would
> > > > we be missing if we did that?
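For concreteness, a sketch of the simpler alternative being suggested
here, rejecting the VF case up front instead of redirecting it to the
PF; this is a hypothetical illustration, not what the patch does:

  #include <linux/pci.h>

  /*
   * Hypothetical guard at the top of pci_enable_pri() (and similarly
   * pci_enable_pasid()): refuse VFs outright.
   */
  static int sketch_reject_vf_pri(struct pci_dev *pdev)
  {
          if (pdev->is_virtfn)    /* VFs have no PRI capability of their own */
                  return -EINVAL; /* ...or return 0 and rely on the PF's PRI */

          return 0;
  }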
> > >
> > > Currently the PRI/PASID capabilities are not enabled by default. The
> > > IOMMU can enable PRI/PASID for a VF first (without enabling it for the
> > > PF). In this case, doing nothing for the VF device will break the
> > > functionality.
> >
> > What is the path where we can enable PRI/PASID for a VF but not for
> > the PF? The call chains leading to pci_enable_pri() go through the
> > iommu_ops.add_device interface, which makes me think this is part of
> > the device enumeration done by the PCI core, and in that case I would
> > think it should be done for the PF before the VFs. But maybe this
> > path isn't exercised until a driver does a DMA map or something
> > similar?
> AFAIK, this path will only get exercised when the device does DMA, and
> hence there is no specific order in which PRI/PASID is enabled in the
> PF/VF. In fact, v2 of this patch set had a check to ensure the PF
> PRI/PASID enable happened before a VF attempted PRI/PASID
> enable/disable. But I had to remove it in a later version of this
> series due to a failure case reported by one of the testers of this
> code.
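For reference, a minimal sketch of the kind of ordering check described
above (the one dropped after v2): reject a VF PRI enable unless the PF
has already enabled PRI. The pf->pri_enabled flag is an assumption
here, standing in for whatever state the PF enable path records.

  #include <linux/pci.h>

  static int sketch_require_pf_pri_first(struct pci_dev *vf)
  {
          struct pci_dev *pf = pci_physfn(vf);

          if (!pf->pri_enabled)   /* assumed flag: PF must enable PRI first */
                  return -EINVAL;

          return 0;               /* VF shares the PF's already-enabled PRI */
  }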
What's the path? And does that path make sense?
I got this far before giving up:
  iommu_go_to_state                                 # AMD
    state_next
      amd_iommu_init_pci
        amd_iommu_init_api
          bus_set_iommu
            iommu_bus_init
              bus_for_each_dev(..., add_iommu_group)
                add_iommu_group
                  iommu_probe_device
                    amd_iommu_add_device            # amd_iommu_ops.add_device
                      init_iommu_group
                        iommu_group_get_for_dev
                          iommu_group_add_device
                            __iommu_attach_device
                              amd_iommu_attach_device  # amd_iommu_ops.attach_dev
                                attach_device       # amd_iommu
                                  pdev_iommuv2_enable
                                    pci_enable_pri

  iommu_probe_device
    intel_iommu_add_device                          # intel_iommu_ops.add_device
      domain_add_dev_info
        dmar_insert_one_dev_info
          domain_context_mapping
            domain_context_mapping_one
              iommu_enable_dev_iotlb
                pci_enable_pri
These *look* like enumeration paths, not DMA setup paths. But I could
be wrong, since I gave up before getting to the source.
I don't want to add all this complexity because we *think* we need it.
I want to think about whether it makes *sense*. Maybe it's sensible
for the PF enumeration or a PF driver to enable the hardware it owns.
If we leave it to the VFs, then we have issues with coordinating
between VFs that want different settings, etc.
If we understand the whole picture and it needs to be in the VFs,
that's fine. But I don't think we understand the whole picture yet.
Bjorn