Message-ID: <20191009134807.GA2091030@rani.riverdale.lan>
Date: Wed, 9 Oct 2019 09:48:07 -0400
From: Arvind Sankar <nivedita@...m.mit.edu>
To: Christoph Hellwig <hch@....de>
Cc: Arvind Sankar <nivedita@...m.mit.edu>,
Christoph Hellwig <hch@...radead.org>,
linux-kernel@...r.kernel.org
Subject: Re: ehci-pci breakage with dma-mapping changes in 5.4-rc2
On Wed, Oct 09, 2019 at 08:50:43AM +0200, Christoph Hellwig wrote:
> On Tue, Oct 08, 2019 at 11:47:31AM -0400, Arvind Sankar wrote:
> > Ok, I see that almost nothing actually uses dma_get_required_mask. So if
> > something did need >4Gb space, the IOMMU would allocate it anyway
> > regardless of dma_get_required_mask.
>
> Yes. And with the direct mapping it also isn't an issue.
>
> > I'm thinking this means that we actually only needed to change
> > dma_get_required_mask to dma_direct_get_required_mask in
> > iommu_need_mapping, and the rest of the change is unnecessary?
> >
> > Below is list of stuff calling dma_get_required_mask currently:
>
> I guess that would actually work ok, but I prefer the more verbose
> version as it explains what is going on, and will lead people to do
> the right thing if we split the iommu vs passthrough case into
> different ops (we already had a patch for that out on the list).
>
> > For the drivers that do currently use dma_get_required_mask, I believe
> > we will need to replace most of them with dma_direct_get_required_mask
> > as well to maintain passthrough functionality -- the fusion, scsi, and
> > infiniband drivers all seem to be using this call to determine whether to
> > expose the device's 64-bit DMA capability or not. With the change they
> > will think they only need 32-bit DMA, which in turn will disable
> > passthrough on them.
>
> At least for some of the legacy SCSI drivers that is intentional, and
> the reason why dma_get_required_mask was originally added. On actual
> PCI (and PCI-X, but not PCIe, which everyone uses now), 64-bit
> addressing, even if supported, is not very efficient, as it needs two
> bus cycles. So we prefer to just use the iommu.
>
> > The etnaviv driver is doing something funky that I'm not sure about, but
> > I *think* that one might want the real physical range as well. The mmc
> > driver I think might be ok with the 32-bit range.
>
> etnaviv is never used on systems with the intel iommu anyway.
>
> > The vmd controller one I'm not sure about.
>
> That just passes through the dma ops to work around really stupid
> intel chipsets.
>
> > In dma-mapping.h, the function is used in dma_addressing_limited, which
> > in turn is used in a couple of amd drm drivers; again, I'm not sure about
> > these.
> ---end quoted text---
Thanks for the detailed explanation!
That means your changes actually improve the situation for those scsi
drivers -- previously they would have been using 64-bit addressing if
physical RAM needed it, regardless of IOMMU availability/use, right?
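
For reference, the one-liner I had in mind in iommu_need_mapping() was
essentially this (the surrounding context is quoted from memory, not
from the actual 5.4-rc2 tree, so take it as a sketch):

	/* passthrough is fine if the device can reach all RAM directly */
	if (dma_mask >= dma_direct_get_required_mask(dev))
		return false;

i.e. keep deciding identity mapping based on what the device would need
without the IOMMU in the picture, instead of the dma_get_required_mask()
call that was there before.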
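To illustrate what I meant about the fusion/scsi drivers, the pattern
there is roughly the following (paraphrased from memory rather than
copied from any one driver, so the details are only illustrative):

	/* expose 64-bit DMA only if RAM actually extends above 4G */
	if (dma_get_required_mask(&pdev->dev) > DMA_BIT_MASK(32) &&
	    !dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
		dev_info(&pdev->dev, "using 64-bit DMA\n");
	else
		dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));

With dma_get_required_mask() now reporting a 32-bit mask whenever the
IOMMU can remap, such drivers fall back to the 32-bit mask, and that
32-bit mask is what then disables passthrough for them.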
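And for completeness, dma_addressing_limited() is just a comparison
against the required mask -- from memory it is roughly:

static inline bool dma_addressing_limited(struct device *dev)
{
	return min_not_zero(dma_get_mask(dev), dev->bus_dma_mask) <
			    dma_get_required_mask(dev);
}

so the amd drm callers see the same behaviour change.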