Message-ID: <1011c272-9527-9e61-4954-c7e27cd1fcb6@ti.com>
Date: Tue, 4 Feb 2020 11:34:35 +0200
From: Peter Ujfalusi <peter.ujfalusi@...com>
To: Christoph Hellwig <hch@....de>, <iommu@...ts.linux-foundation.org>
CC: <robin.murphy@....com>, <m.szyprowski@...sung.com>,
<linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] dma-direct: relax addressability checks in dma_direct_supported
Hi Christoph,
On 03/02/2020 19.16, Christoph Hellwig wrote:
> dma_direct_supported tries to find the minimum addressable bitmask
> based on the end pfn and optional magic that architectures can use
> to communicate the size of the magic ZONE_DMA that can be used
> for bounce buffering. But between DMA offsets that can change per
> device (or sometimes even per region), the fact that ZONE_DMA isn't
> even guaranteed to cover the lowest addresses, and the lack of
> proper interfaces to the MM code, this check fails for at least one
> arm subarchitecture.
>
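
FWIW, here is a small standalone sketch of that failure mode (the numbers
are made up and are not the real k2 memory map; phys_to_dma() below is only
a stand-in for __phys_to_dma() on a platform with a bus offset):

/*
 * Standalone illustration, not kernel code: translating a "minimum"
 * physical address through a per-device offset can make the old check
 * reject a device that can in fact reach all of RAM through its window.
 */
#include <stdint.h>
#include <stdio.h>

#define DMA_BIT_MASK(n)  (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* stand-in for __phys_to_dma() on a platform with a bus offset */
static uint64_t phys_to_dma(uint64_t phys, uint64_t offset)
{
	return phys - offset;	/* wraps around for low physical addresses */
}

int main(void)
{
	uint64_t ram_end    = 0x880000000ULL;	/* RAM lives above 32 bits  */
	uint64_t bus_offset = 0x780000000ULL;	/* device sees RAM at 2 GiB */
	uint64_t min_mask   = DMA_BIT_MASK(32);	/* old "minimum" estimate   */
	uint64_t mask       = DMA_BIT_MASK(32);	/* what the driver asks for */

	/*
	 * 0xffffffff is below the start of RAM here, so the translation
	 * wraps and the comparison fails, although every byte of RAM is
	 * visible inside the device's 32-bit window.
	 */
	printf("old check says supported: %d\n",
	       mask >= phys_to_dma(min_mask, bus_offset));
	printf("highest RAM address as the device sees it: %#llx\n",
	       (unsigned long long)phys_to_dma(ram_end - 1, bus_offset));
	return 0;
}
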
> As all the legacy DMA implementations have supported 32-bit DMA
> masks, and 32-bit masks are guaranteed to always work by the API
> contract (using bounce buffers if needed), we can short-cut the
> complicated check and always return true without breaking existing
> assumptions. Hopefully we can properly clean up the interaction
> with the arch defined zones and the bootmem allocator eventually.
>
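
And for reference, the driver-side contract this short-cut relies on looks
roughly like the snippet below (foo_probe() and its platform device are
made-up names, just to illustrate the 32-bit baseline a driver can count
on; anything the device cannot address is bounced behind the API):

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>

static int foo_probe(struct platform_device *pdev)
{
	int ret;

	/* 32 bits is the baseline mask the DMA API always has to honour */
	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
	if (ret)
		return ret;

	/*
	 * From here on dma_map_single()/dma_alloc_coherent() either hand
	 * back addresses inside the mask or bounce through swiotlb.
	 */
	return 0;
}
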
> Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE configs")
> Reported-by: Peter Ujfalusi <peter.ujfalusi@...com>
> Signed-off-by: Christoph Hellwig <hch@....de>
> Tested-by: Peter Ujfalusi <peter.ujfalusi@...com>
Thank you for the proper patch; I can reaffirm my Tested-by.
We have also tested remoteproc on k2, which was broken as well.
Thanks again,
- Péter
> ---
> kernel/dma/direct.c | 24 +++++++++++-------------
> 1 file changed, 11 insertions(+), 13 deletions(-)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 04f308a47fc3..efab894c1679 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -464,28 +464,26 @@ int dma_direct_mmap(struct device *dev, struct vm_area_struct *vma,
> }
> #endif /* CONFIG_MMU */
>
> -/*
> - * Because 32-bit DMA masks are so common we expect every architecture to be
> - * able to satisfy them - either by not supporting more physical memory, or by
> - * providing a ZONE_DMA32. If neither is the case, the architecture needs to
> - * use an IOMMU instead of the direct mapping.
> - */
> int dma_direct_supported(struct device *dev, u64 mask)
> {
> - u64 min_mask;
> -
> - if (IS_ENABLED(CONFIG_ZONE_DMA))
> - min_mask = DMA_BIT_MASK(zone_dma_bits);
> - else
> - min_mask = DMA_BIT_MASK(32);
> + u64 min_mask = (max_pfn - 1) << PAGE_SHIFT;
>
> - min_mask = min_t(u64, min_mask, (max_pfn - 1) << PAGE_SHIFT);
> + /*
> + * Because 32-bit DMA masks are so common we expect every architecture
> + * to be able to satisfy them - either by not supporting more physical
> + * memory, or by providing a ZONE_DMA32. If neither is the case, the
> + * architecture needs to use an IOMMU instead of the direct mapping.
> + */
> + if (mask >= DMA_BIT_MASK(32))
> + return 1;
>
> /*
> * This check needs to be against the actual bit mask value, so
> * use __phys_to_dma() here so that the SME encryption mask isn't
> * part of the check.
> */
> + if (IS_ENABLED(CONFIG_ZONE_DMA))
> + min_mask = min_t(u64, min_mask, DMA_BIT_MASK(zone_dma_bits));
> return mask >= __phys_to_dma(dev, min_mask);
> }
>
>
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki