Message-ID: <5FC3163CFD30C246ABAA99954A238FA83866B53C@FRAEML521-MBX.china.huawei.com>
Date: Thu, 8 Mar 2018 09:35:15 +0000
From: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@...wei.com>
To: Robin Murphy <robin.murphy@....com>,
Alex Williamson <alex.williamson@...hat.com>
CC: Auger Eric <eric.auger@...hat.com>,
"pmorel@...ux.vnet.ibm.com" <pmorel@...ux.vnet.ibm.com>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Linuxarm <linuxarm@...wei.com>,
John Garry <john.garry@...wei.com>,
"xuwei (O)" <xuwei5@...wei.com>
Subject: RE: [PATCH v4 4/6] vfio/type1: check dma map request is within a
valid iova range
> -----Original Message-----
> From: Shameerali Kolothum Thodi
> Sent: Monday, March 05, 2018 11:44 AM
> To: 'Robin Murphy' <robin.murphy@....com>; Alex Williamson
> <alex.williamson@...hat.com>
> Cc: Auger Eric <eric.auger@...hat.com>; pmorel@...ux.vnet.ibm.com;
> kvm@...r.kernel.org; linux-kernel@...r.kernel.org; Linuxarm
> <linuxarm@...wei.com>; John Garry <john.garry@...wei.com>; xuwei (O)
> <xuwei5@...wei.com>
> Subject: RE: [PATCH v4 4/6] vfio/type1: check dma map request is within a valid
> iova range
[...]
> > >>> I still think you're overstretching the IOMMU reserved region interface
> > >>> by trying to report possible ACS conflicts. Are we going to update the
> > >>> reserved list on device hotplug? Are we going to update the list when
> > >>> MMIO is enabled or disabled for each device? What if the BARs are
> > >>> reprogrammed or bridge apertures changed? IMO, the IOMMU reserved list
> > >>> should account for specific IOVA exclusions that apply to transactions
> > >>> that actually reach the IOMMU, not aliasing that might occur in the
> > >>> downstream topology. Additionally, the IOMMU group composition must be
> > >>> such that these sorts of aliasing issues can only occur within an IOMMU
> > >>> group. If a transaction can be affected by the MMIO programming of
> > >>> another group, then the groups are drawn incorrectly for the isolation
> > >>> capabilities of the hardware. Thanks,
> > >>
> > >> Agree that this is not a perfect thing to do covering all scenarios. As Robin
> > >> pointed out, the current code is playing safe by reserving the full windows.
> > >>
> > >> My suggestion would be to limit this reservation to non-ACS cases only.
> > >> This will make sure that ACS ARM hardware is not broken by this series
> > >> and non-ACS ones have a chance to run once Qemu is updated to take care
> > >> of the reserved regions (at least in some user scenarios).
> > >>
> > >> If you are all fine with this, I can include the ACS check I mentioned
> > >> earlier in iommu_dma_get_resv_regions() and send out the revised series.
> > >>
> > >> Please let me know your thoughts.
> > >
> > > IMO, the IOMMU layer should concern itself with ACS only as far as
> > > isolation is concerned, report reserved ranges that are imposed at the
> > > IOMMU itself, and leave any aliasing or routing issues in the downstream
> > > topology to other layers, or perhaps to the user. Unfortunately, enforcing the
> > > iova list in vfio is gated by some movement here since we can't break
> > > existing users. Thanks,
> >
> > FWIW, given the discussion we've had here I wouldn't object to pushing
> > the PCI window reservation back into the DMA-specific path, such that it
> > doesn't get exposed via the general IOMMU API interface. We *do* want to
> > do it there where we are in total control of our own address space and
> > it just avoids a whole class of problems (even with ACS I'm not sure the
> > root complex can be guaranteed to send a "bad" IOVA out to the SMMU
> > instead of just terminating it for looking like a peer-to-peer attempt).
> >
> > I do agree that it's not scalable for the IOMMU layer to attempt to
> > detect and describe upstream PCI limitations to userspace by itself -
> > they are "reserved regions" rather than "may or may not work regions"
> > after all. If there is a genuine integration issue between an IOMMU and
> > an upstream PCI RC which restricts usable addresses on a given system,
> > that probably needs to be explicitly communicated from firmware to the
> > IOMMU driver anyway, at which point that driver can report the relevant
> > region(s) directly from its own callback.
> >
> > I suppose there's an in-between option of keeping generic window
> > reservations but giving them a new "only reserve this if you're being
> > super-cautious or don't know better" region type which we hide from
> > userspace and ignore in VFIO, but maybe that leaves the lines a bit too
> > blurred still.
>
> Thanks for your reply and details. I have made an attempt to move the PCI
> window reservation back into the DMA path. Could you please take a look
> at the changes below and let me know your thoughts?
> (This is on top of the HW MSI reserve patches, which are now part of linux-next.)
Hi Robin,
Please take a look at the changes below if you get some time and let me know.
I am not sure this is what you had in mind.
Thanks,
Shameer
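
(For completeness, the ACS check I mentioned in the earlier mail would have
been along the lines of the below untested sketch, gating the window
reservation on ACS being enabled along the whole path to the root complex.
REQ_ACS_FLAGS here is illustrative and local to the sketch, not an existing
macro in dma-iommu.c:)

#define REQ_ACS_FLAGS	(PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)

	/* Untested sketch, not part of the diff below: placed ahead of the
	 * window-reservation loop so that windows would only be reserved
	 * for non-ACS topologies.
	 */
	if (dev_is_pci(dev) &&
	    pci_acs_path_enabled(to_pci_dev(dev), NULL, REQ_ACS_FLAGS))
		return;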
>
> -->8--
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index f05f3cf..ddcbbdb 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -167,40 +167,16 @@ EXPORT_SYMBOL(iommu_put_dma_cookie);
> * @list: Reserved region list from iommu_get_resv_regions()
> *
> * IOMMU drivers can use this to implement their .get_resv_regions callback
> - * for general non-IOMMU-specific reservations. Currently, this covers host
> - * bridge windows for PCI devices and GICv3 ITS region reservation on ACPI
> - * based ARM platforms that may require HW MSI reservation.
> + * for general non-IOMMU-specific reservations. Currently, this covers GICv3
> + * ITS region reservation on ACPI based ARM platforms that may require HW MSI
> + * reservation.
> */
> void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list)
> {
> - struct pci_host_bridge *bridge;
> - struct resource_entry *window;
> -
> - if (!is_of_node(dev->iommu_fwspec->iommu_fwnode) &&
> - iort_iommu_msi_get_resv_regions(dev, list) < 0)
> - return;
> -
> - if (!dev_is_pci(dev))
> - return;
> -
> - bridge = pci_find_host_bridge(to_pci_dev(dev)->bus);
> - resource_list_for_each_entry(window, &bridge->windows) {
> - struct iommu_resv_region *region;
> - phys_addr_t start;
> - size_t length;
> -
> - if (resource_type(window->res) != IORESOURCE_MEM)
> - continue;
>
> - start = window->res->start - window->offset;
> - length = window->res->end - window->res->start + 1;
> - region = iommu_alloc_resv_region(start, length, 0,
> - IOMMU_RESV_RESERVED);
> - if (!region)
> - return;
> + if (!is_of_node(dev->iommu_fwspec->iommu_fwnode))
> + iort_iommu_msi_get_resv_regions(dev, list);
>
> - list_add_tail(&region->list, list);
> - }
> }
> EXPORT_SYMBOL(iommu_dma_get_resv_regions);
>
> @@ -229,6 +205,23 @@ static int cookie_init_hw_msi_region(struct iommu_dma_cookie *cookie,
> return 0;
> }
>
> +static void iova_reserve_pci_windows(struct pci_dev *dev,
> + struct iova_domain *iovad)
> +{
> + struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
> + struct resource_entry *window;
> + unsigned long lo, hi;
> +
> + resource_list_for_each_entry(window, &bridge->windows) {
> + if (resource_type(window->res) != IORESOURCE_MEM)
> + continue;
> +
> + lo = iova_pfn(iovad, window->res->start - window->offset);
> + hi = iova_pfn(iovad, window->res->end - window->offset);
> + reserve_iova(iovad, lo, hi);
> + }
> +}
> +
> static int iova_reserve_iommu_regions(struct device *dev,
> struct iommu_domain *domain)
> {
> @@ -238,6 +231,9 @@ static int iova_reserve_iommu_regions(struct device *dev,
> LIST_HEAD(resv_regions);
> int ret = 0;
>
> + if (dev_is_pci(dev))
> + iova_reserve_pci_windows(to_pci_dev(dev), iovad);
> +
> iommu_get_resv_regions(dev, &resv_regions);
> list_for_each_entry(region, &resv_regions, list) {
> unsigned long lo, hi;
>
>
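
(For reference, the vfio/type1 side of this series validates each DMA map
request against the accumulated valid iova list roughly as in the sketch
below; treat the names and exact shape as approximate rather than the
posted hunk:)

static bool vfio_iommu_iova_dma_valid(struct vfio_iommu *iommu,
				      dma_addr_t start, dma_addr_t end)
{
	struct vfio_iova *node;

	/* The request must fall entirely within one valid iova range. */
	list_for_each_entry(node, &iommu->iova_list, list) {
		if (start >= node->start && end <= node->end)
			return true;
	}

	return false;
}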