Message-ID: <20190409133157.GA10876@lst.de>
Date: Tue, 9 Apr 2019 15:31:58 +0200
From: "hch@....de" <hch@....de>
To: Thomas Hellstrom <thellstrom@...are.com>
Cc: "hch@....de" <hch@....de>,
"torvalds@...ux-foundation.org" <torvalds@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Deepak Singh Rawat <drawat@...are.com>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>
Subject: Re: revert dma direct internals abuse
On Tue, Apr 09, 2019 at 01:04:51PM +0000, Thomas Hellstrom wrote:
> On the VMware platform we have two possible vIOMMUs, the AMD IOMMU and
> Intel VT-d. Given those conditions I believe the patch is functionally
> correct. We can't cover the AMD case with intel_iommu_enabled.
> Furthermore, the only form of incoherency that can affect our graphics
> device is someone forcing SWIOTLB, in which case that person would be
> happier with software rendering. In any case, observing that the
> direct_ops are not used makes sure that SWIOTLB is not used.
> Knowing that we're on the VMware platform, we're coherent and can
> safely have the DMA layer do DMA address translation for us. All this
> information was not explicitly written in the changelog, no.
We have a series pending that might bounce your buffers even when
using the Intel IOMMU, which should eventually also find its way
to other IOMMUs:
https://lists.linuxfoundation.org/pipermail/iommu/2019-March/034090.html
> In any case, assuming that patch is reverted due to the layering
> violation, are you willing to help out with a small API to detect the
> situation where streaming DMA is incoherent?
The short but sad answer is that we can't ever guarantee that you
can skip the dma_*sync_* calls. There are too many factors in play
that might require them at any time - working around unaligned addresses
in IOMMUs, CPUs that are coherent for some devices and not others, and
addressing limitations in both physical CPUs and VMs (see the various
"secure VM" concepts floating around at the moment).
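
For illustration, here is a minimal sketch of the streaming DMA pattern
those calls belong to; dev, buf and size are placeholders rather than
anything from the vmwgfx code, and the dma_sync_single_* calls are the
ones that can't safely be skipped:

#include <linux/dma-mapping.h>

static int example_streaming_dma(struct device *dev, void *buf, size_t size)
{
	dma_addr_t addr;

	/* Map the buffer for the device; this may bounce through swiotlb. */
	addr = dma_map_single(dev, buf, size, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	/* ... the device DMAs into the buffer here ... */

	/* Give ownership back to the CPU before it reads the data. */
	dma_sync_single_for_cpu(dev, addr, size, DMA_FROM_DEVICE);

	/* ... the CPU looks at the data here ... */

	/* Hand the buffer back to the device for the next transfer. */
	dma_sync_single_for_device(dev, addr, size, DMA_FROM_DEVICE);

	dma_unmap_single(dev, addr, size, DMA_FROM_DEVICE);
	return 0;
}
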
If you want to avoid the dma_*sync_* calls you must use
dma_alloc_coherent to allocate the memory. Note that the memory for
dma_alloc_coherent actually comes from the normal page pool most of
the time, and certainly on x86, which seems to be what you care
about. The days of it dipping into the tiny swiotlb pool are long
gone. So at least for you I see absolutely no reason not to simply
always use dma_alloc_coherent to start with. For other uses that
involve platforms without DMA-coherent devices, like arm, the
tradeoffs might be a little different.
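
To make that concrete, a rough sketch of the coherent allocation
(again dev and size are just placeholders); no dma_*sync_* calls are
needed between CPU and device accesses to this memory:

#include <linux/dma-mapping.h>

static int example_coherent_dma(struct device *dev, size_t size)
{
	dma_addr_t dma_handle;
	void *cpu_addr;

	/* On x86 this comes from the normal page allocator. */
	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/*
	 * The CPU can use cpu_addr and the device can use dma_handle
	 * without any dma_*sync_* calls in between.
	 */

	dma_free_coherent(dev, size, cpu_addr, dma_handle);
	return 0;
}
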