Message-ID: <2cdd4142-98e5-14de-2f34-264244f24d01@redhat.com>
Date: Wed, 10 Apr 2019 15:02:23 +0200
From: Auger Eric <eric.auger@...hat.com>
To: Vincent Stehlé <vincent.stehle@....com>
Cc: Alex Williamson <alex.williamson@...hat.com>,
eric.auger.pro@...il.com, iommu@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
kvmarm@...ts.cs.columbia.edu, joro@...tes.org,
jacob.jun.pan@...ux.intel.com, yi.l.liu@...ux.intel.com,
jean-philippe.brucker@....com, will.deacon@....com,
robin.murphy@....com, kevin.tian@...el.com, ashok.raj@...el.com,
marc.zyngier@....com, christoffer.dall@....com,
peter.maydell@...aro.org
Subject: Re: [PATCH v6 09/22] vfio: VFIO_IOMMU_BIND/UNBIND_MSI
Hi Vincent,
On 4/10/19 2:35 PM, Vincent Stehlé wrote:
> On Thu, Apr 04, 2019 at 08:55:25AM +0200, Auger Eric wrote:
>> Hi Marc, Robin, Alex,
> (..)
>> Do you think this is a reasonable assumption to consider devices within
>> the same host iommu group share the same MSI doorbell?
>
> Hi Eric,
>
> I am not sure this assumption always holds.
>
> Marc, Robin and Alex can correct me, but for example I think the following
> topology is valid for Arm systems:
>
> +------------+      +------------+
> | Endpoint A |      | Endpoint B |
> +------------+      +------------+
>       v                   v
>       /-------------------\
>       |      Non-ACS      |
>       |      Switch       |
>       \-------------------/
>                 v
>        +---------------+
>        |     PCIe      |
>        | Root Complex  |
>        +---------------+
>                 v
>           +-----------+
>           |   SMMU    |
>           +-----------+
>                 v
> +--------------------------+
> |    System interconnect   |
> +--------------------------+
>       v                   v
> +-----------+       +-----------+
> |   ITS A   |       |   ITS B   |
> +-----------+       +-----------+
>
> All PCIe Endpoints and ITSes could be in the same ITS Group 0, meaning
> devices could send their MSIs to any ITS in hardware.
>
> For Linux the two PCIe Endpoints would be in the same iommu group, because
> the switch in this example does not support ACS.
>
> I think the devicetree msi-map property could be used to "map" the RID of
> Endpoint A to ITS A and the RID of Endpoint B to ITS B, which would violate
> the assumption.
>
> See the monolithic example in [1], the example system in [2], appendices
> D, E and F in [3] and the msi-map property in [4].
Thank you for the review & links.
I understand the above topology is perfectly valid. Now the question is:
is it sufficiently common to care about it?
At the moment, VFIO/vIOMMU assignment of devices belonging to the same
group isn't upstream yet. Work is ongoing by Alex to support it. It uses
a PCIe-to-PCI bridge on the guest side, and it looks like this topology
is not supported by the SMMUv3 driver. Then comes the trouble of using
several ITSes in nested mode.
If this topology is sufficiently rare, I propose we do not support it
in this VFIO/vIOMMU use case. In v7 I introduced a check that aims to
verify that devices attached to the same nested iommu_domain share the
same msi_domain.
Thanks
Eric
>
> Best regards,
> Vincent.
>
> [1] https://static.docs.arm.com/100336/0102/corelink_gic600_generic_interrupt_controller_technical_reference_manual_100336_0102_00_en.pdf
> [2] http://infocenter.arm.com/help/topic/com.arm.doc.den0049d/DEN0049D_IO_Remapping_Table.pdf
> [3] https://static.docs.arm.com/den0029/50/Q1-DEN0029B_SBSA_5.0.pdf
> [4] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/pci/pci-msi.txt
>