Message-ID: <20191008115103.GA463127@rani.riverdale.lan>
Date: Tue, 8 Oct 2019 07:51:03 -0400
From: Arvind Sankar <nivedita@...m.mit.edu>
To: Christoph Hellwig <hch@....de>
Cc: Arvind Sankar <nivedita@...m.mit.edu>,
Christoph Hellwig <hch@...radead.org>,
linux-kernel@...r.kernel.org
Subject: Re: ehci-pci breakage with dma-mapping changes in 5.4-rc2
On Tue, Oct 08, 2019 at 09:32:10AM +0200, Christoph Hellwig wrote:
> On Mon, Oct 07, 2019 at 07:54:02PM -0400, Arvind Sankar wrote:
> > > Do you want me to resend the patch as its own mail, or do you just take
> > > it with a Tested-by: from me? If the former, I assume you're ok with me
> > > adding your Signed-off-by?
> > >
> > > Thanks
> >
> > A question on the original change though -- what happens if a single
> > device (or a single IOMMU domain really) does want >4G DMA address
> > space? Was that not previously allowed either?
>
> Your EHCI device actually supports the larger addressing. Without an
> IOMMU (or with accidentally enabled passthrough mode as in your report)
> the kernel will use bounce buffers for physical addresses that are too
> large. With an IOMMU we can just remap, and by default those remap
> addresses are under 32 bits just to make everyone's life easier.
>
> The dma_get_required_mask function is unfortunately misnamed; what it
> really returns is the optimal mask, that is, one that avoids bounce
> buffering or other complications.
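For concreteness, in driver terms the distinction reads roughly like this
(a minimal sketch with a made-up probe routine, not taken from ehci-pci;
the calls are the standard <linux/dma-mapping.h> ones):

#include <linux/dma-mapping.h>

static int example_probe(struct device *dev)
{
	/*
	 * Despite the name, dma_get_required_mask() reports the mask
	 * that avoids bounce buffering, not a hard requirement.
	 */
	u64 optimal = dma_get_required_mask(dev);

	if (optimal > DMA_BIT_MASK(32) &&
	    !dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
		return 0;

	/* Fall back to 32-bit; the DMA core bounces or remaps as needed. */
	return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
}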
I understand that my EHCI device, even though it only supports 32-bit
addressing, will be able to DMA to anywhere in physical RAM, whether
below 4G or not, via the IOMMU or bounce buffering.
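I.e. something like the following (hypothetical device and buffer, standard
<linux/dma-mapping.h> calls) works regardless of where the buffer physically
sits:

	/* dev has declared a 32-bit mask via dma_set_mask_and_coherent() */
	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/*
	 * handle fits in 32 bits: with an IOMMU the buffer was remapped
	 * to a low IOVA, without one it was bounced below 4G.
	 */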
What I mean is, do there exist devices (which would necessarily support
64-bit DMA) that want more than 4GB of DMA address space mapped at once?
E.g. a GPU accelerator card with 16GB of on-board RAM that wants to map
6GB for DMA in one go, or 5 accelerator cards in one IOMMU domain that
each want to map 1GB simultaneously.
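A purely hypothetical sketch of that scenario, just to pin down what I am
asking about (the device, buffers and sizes are made up; a real driver
would more likely use scatter-gather via dma_map_sg):

#include <linux/dma-mapping.h>
#include <linux/sizes.h>

#define NBUFS	6

/*
 * A 64-bit capable device keeping six 1GB mappings live at once, so its
 * IOMMU domain has to hand out more than 32 bits' worth of IOVA space.
 * The buffers are assumed to be DMA-able contiguous memory.
 */
static int map_six_gigabytes(struct device *dev, void *bufs[NBUFS],
			     dma_addr_t handles[NBUFS])
{
	int i;

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
		return -EIO;

	for (i = 0; i < NBUFS; i++) {
		handles[i] = dma_map_single(dev, bufs[i], SZ_1G,
					    DMA_BIDIRECTIONAL);
		if (dma_mapping_error(dev, handles[i]))
			return -ENOMEM;	/* unmapping earlier entries omitted */
	}
	return 0;
}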