Message-ID: <1792a2eb-83e0-9849-082e-07a1602e25ed@arm.com>
Date: Tue, 19 Jan 2021 13:40:10 +0000
From: Robin Murphy <robin.murphy@....com>
To: Lianbo Jiang <lijiang@...hat.com>, linux-kernel@...r.kernel.org
Cc: joro@...tes.org, will@...nel.org, iommu@...ts.linux-foundation.org,
jsnitsel@...hat.com, thomas.lendacky@....com, bhe@...hat.com
Subject: Re: [PATCH 2/2 v2] iommu: use the __iommu_attach_device() directly
for deferred attach
On 2021-01-19 11:16, Lianbo Jiang wrote:
> Currently, domain attach may be deferred from the iommu driver to the
> device driver, and when the iommu initializes, the devices on the bus
> are scanned and the default groups are allocated.
>
> Due to the above changes, some devices could be added to the same
> group as below:
>
> [ 3.859417] pci 0000:01:00.0: Adding to iommu group 16
> [ 3.864572] pci 0000:01:00.1: Adding to iommu group 16
> [ 3.869738] pci 0000:02:00.0: Adding to iommu group 17
> [ 3.874892] pci 0000:02:00.1: Adding to iommu group 17
>
> But when attaching these devices, iommu_attach_device() does not allow
> a group to contain more than one device, and returns an error otherwise.
> This conflicts with deferred attach. Unfortunately, on my machine each
> group has two devices, for example:
>
> [ 9.627014] iommu_group_device_count(): device name[0]:0000:01:00.0
> [ 9.633545] iommu_group_device_count(): device name[1]:0000:01:00.1
> ...
> [ 10.255609] iommu_group_device_count(): device name[0]:0000:02:00.0
> [ 10.262144] iommu_group_device_count(): device name[1]:0000:02:00.1
>
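For anyone following along, the restriction being hit is the single-device
check in iommu_attach_device(). Roughly the shape of the current helper in
drivers/iommu/iommu.c - an approximate sketch for context only, not part of
this patch:

/* Approximate reconstruction of the existing core helper, for context. */
int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
{
	struct iommu_group *group;
	int ret;

	group = iommu_group_get(dev);
	if (!group)
		return -ENODEV;

	/*
	 * Lock the group to make sure the device count doesn't change
	 * while we are attaching.
	 */
	mutex_lock(&group->mutex);
	ret = -EINVAL;
	if (iommu_group_device_count(group) != 1)
		goto out_unlock;	/* more than one device in the group -> fail */

	ret = __iommu_attach_group(domain, group);

out_unlock:
	mutex_unlock(&group->mutex);
	iommu_group_put(group);
	return ret;
}
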
> This finally causes the tg3 driver to fail when it calls
> dma_alloc_coherent() to allocate coherent memory in tg3_test_dma():
>
> [ 9.660310] tg3 0000:01:00.0: DMA engine test failed, aborting
> [ 9.754085] tg3: probe of 0000:01:00.0 failed with error -12
> [ 9.997512] tg3 0000:01:00.1: DMA engine test failed, aborting
> [ 10.043053] tg3: probe of 0000:01:00.1 failed with error -12
> [ 10.288905] tg3 0000:02:00.0: DMA engine test failed, aborting
> [ 10.334070] tg3: probe of 0000:02:00.0 failed with error -12
> [ 10.578303] tg3 0000:02:00.1: DMA engine test failed, aborting
> [ 10.622629] tg3: probe of 0000:02:00.1 failed with error -12
>
> In addition, similar situations occur in other drivers, such as
> bnxt_en. This can be reproduced easily in a kdump kernel when SME is
> active.
>
> Let's move the handling currently in iommu_dma_deferred_attach() into
> the iommu core code so that it can call __iommu_attach_device()
> directly instead of iommu_attach_device(). The external interface
> iommu_attach_device() is not suitable for handling this situation.
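And the reason this works is that __iommu_attach_device() goes straight to
the driver's attach_dev callback and skips the group-size check above.
Roughly (again an approximate sketch of the existing iommu.c helper, for
context only):

/* Approximate reconstruction of the existing internal helper, for context. */
static int __iommu_attach_device(struct iommu_domain *domain,
				 struct device *dev)
{
	int ret;

	if (unlikely(domain->ops->attach_dev == NULL))
		return -ENODEV;

	/* Attach the device directly; no group-size restriction here. */
	ret = domain->ops->attach_dev(domain, dev);
	if (!ret)
		trace_attach_device_to_domain(dev);
	return ret;
}
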
Reviewed-by: Robin Murphy <robin.murphy@....com>
> Signed-off-by: Lianbo Jiang <lijiang@...hat.com>
> ---
> drivers/iommu/dma-iommu.c | 18 +++---------------
> drivers/iommu/iommu.c | 12 ++++++++++++
> include/linux/iommu.h | 2 ++
> 3 files changed, 17 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 3711b4a6e4f9..fa6f9098e77d 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -380,18 +380,6 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
> return iova_reserve_iommu_regions(dev, domain);
> }
>
> -static int iommu_dma_deferred_attach(struct device *dev,
> - struct iommu_domain *domain)
> -{
> - const struct iommu_ops *ops = domain->ops;
> -
> - if (unlikely(ops->is_attach_deferred &&
> - ops->is_attach_deferred(domain, dev)))
> - return iommu_attach_device(domain, dev);
> -
> - return 0;
> -}
> -
> /**
> * dma_info_to_prot - Translate DMA API directions and attributes to IOMMU API
> * page flags.
> @@ -535,7 +523,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
> dma_addr_t iova;
>
> if (static_branch_unlikely(&__deferred_attach) &&
> - iommu_dma_deferred_attach(dev, domain))
> + iommu_do_deferred_attach(dev, domain))
> return DMA_MAPPING_ERROR;
>
> size = iova_align(iovad, size + iova_off);
> @@ -694,7 +682,7 @@ static void *iommu_dma_alloc_remap(struct device *dev, size_t size,
> *dma_handle = DMA_MAPPING_ERROR;
>
> if (static_branch_unlikely(&__deferred_attach) &&
> - iommu_dma_deferred_attach(dev, domain))
> + iommu_do_deferred_attach(dev, domain))
> return NULL;
>
> min_size = alloc_sizes & -alloc_sizes;
> @@ -1005,7 +993,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
> int i;
>
> if (static_branch_unlikely(&__deferred_attach) &&
> - iommu_dma_deferred_attach(dev, domain))
> + iommu_do_deferred_attach(dev, domain))
> return 0;
>
> if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index ffeebda8d6de..32164d355d2e 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -1980,6 +1980,18 @@ int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
> }
> EXPORT_SYMBOL_GPL(iommu_attach_device);
>
> +int iommu_do_deferred_attach(struct device *dev,
> + struct iommu_domain *domain)
> +{
> + const struct iommu_ops *ops = domain->ops;
> +
> + if (unlikely(ops->is_attach_deferred &&
> + ops->is_attach_deferred(domain, dev)))
> + return __iommu_attach_device(domain, dev);
> +
> + return 0;
> +}
> +
> /*
> * Check flags and other user provided data for valid combinations. We also
> * make sure no reserved fields or unused flags are set. This is to ensure
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index b3f0e2018c62..c04bbd30aa1d 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -424,6 +424,8 @@ extern struct iommu_group *iommu_group_get_by_id(int id);
> extern void iommu_domain_free(struct iommu_domain *domain);
> extern int iommu_attach_device(struct iommu_domain *domain,
> struct device *dev);
> +extern int iommu_do_deferred_attach(struct device *dev,
> + struct iommu_domain *domain);
> extern void iommu_detach_device(struct iommu_domain *domain,
> struct device *dev);
> extern int iommu_uapi_cache_invalidate(struct iommu_domain *domain,
>