Message-ID: <20200714110141.GD16178@lst.de>
Date: Tue, 14 Jul 2020 13:01:41 +0200
From: Christoph Hellwig <hch@....de>
To: Robin Murphy <robin.murphy@....com>
Cc: Claire Chang <tientzu@...omium.org>, robh+dt@...nel.org,
frowand.list@...il.com, hch@....de, m.szyprowski@...sung.com,
treding@...dia.com, gregkh@...uxfoundation.org,
saravanak@...gle.com, suzuki.poulose@....com,
dan.j.williams@...el.com, heikki.krogerus@...ux.intel.com,
bgolaszewski@...libre.com, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org, iommu@...ts.linux-foundation.org,
tfiga@...omium.org, drinkcat@...omium.org
Subject: Re: [PATCH 1/4] dma-mapping: Add bounced DMA ops
On Mon, Jul 13, 2020 at 12:55:43PM +0100, Robin Murphy wrote:
> On 2020-07-13 10:12, Claire Chang wrote:
>> The bounced DMA ops provide an implementation of DMA ops that bounce
>> streaming DMA in and out of a specially allocated region. Only the
>> operations relevant to streaming DMA are supported.
>
> I think there are too many implicit assumptions here - apparently that
> coherent allocations will always be intercepted by
> dma_*_from_dev_coherent(), and that calling into dma-direct won't actually
> bounce things a second time beyond where you thought they were going,
> manage coherency for a different address, and make it all go subtly wrong.
> Consider "swiotlb=force", for instance...
>
> Again, plumbing this straight into dma-direct so that SWIOTLB can simply
> target a different buffer and always bounce regardless of masks would seem
> a far better option.
I haven't really had time to read through the details, but I agree that
any bouncing scheme should reuse the swiotlb code and not invent a
parallel infrastructure.
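
[For readers following along: the bouncing being discussed can be modeled in a few lines. The sketch below is a toy user-space illustration of the core idea only, not the kernel's actual swiotlb or dma-direct API; the names `bounce_map`, `bounce_unmap`, `BOUNCE_SLOTS`, and `SLOT_SIZE` are all hypothetical. It shows why reusing one bounce path matters: "map" copies the caller's buffer into a restricted pool and hands out the pool address, and "unmap" copies the device's writes back, so a second, parallel bouncing layer (e.g. swiotlb=force on top of these ops) would copy the data twice and sync the wrong address.]

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy restricted DMA pool: a fixed number of fixed-size slots. */
#define SLOT_SIZE   256
#define BOUNCE_SLOTS  4

static uint8_t bounce_pool[BOUNCE_SLOTS][SLOT_SIZE];
static void *slot_owner[BOUNCE_SLOTS]; /* original buffer each slot shadows */

/*
 * "Map" for streaming DMA: copy the caller's buffer into the restricted
 * pool and return the pool address (standing in for the DMA address the
 * device would be given). Returns NULL if the buffer is too large or the
 * pool is exhausted.
 */
static void *bounce_map(void *buf, size_t len)
{
	int i;

	if (len > SLOT_SIZE)
		return NULL;
	for (i = 0; i < BOUNCE_SLOTS; i++) {
		if (!slot_owner[i]) {
			slot_owner[i] = buf;
			memcpy(bounce_pool[i], buf, len); /* sync for device */
			return bounce_pool[i];
		}
	}
	return NULL;
}

/*
 * "Unmap" after the device has written into the bounce slot: copy the
 * data back to the original buffer and release the slot.
 */
static void bounce_unmap(void *dma_addr, size_t len)
{
	int i;

	for (i = 0; i < BOUNCE_SLOTS; i++) {
		if (bounce_pool[i] == (uint8_t *)dma_addr && slot_owner[i]) {
			memcpy(slot_owner[i], bounce_pool[i], len); /* sync for CPU */
			slot_owner[i] = NULL;
			return;
		}
	}
}
```

[In the real kernel the equivalent copy-in/copy-out and slot accounting already live in the swiotlb code, which is the point of the reply above: teach swiotlb to target the restricted buffer rather than duplicating this logic in a separate set of DMA ops.]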