Message-ID: <1399383725.2237.26.camel@dabdike.int.hansenpartnership.com>
Date:	Tue, 6 May 2014 13:42:06 +0000
From:	James Bottomley <jbottomley@...allels.com>
To:	"bhelgaas@...gle.com" <bhelgaas@...gle.com>
CC:	"rdunlap@...radead.org" <rdunlap@...radead.org>,
	"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
	"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
	"arnd@...db.de" <arnd@...db.de>,
	"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] DMA-API: Change dma_declare_coherent_memory() CPU
 address to phys_addr_t

On Tue, 2014-05-06 at 07:18 -0600, Bjorn Helgaas wrote:
> On Mon, May 5, 2014 at 8:42 PM, James Bottomley
> <jbottomley@...allels.com> wrote:
> > On Mon, 2014-05-05 at 17:01 -0600, Bjorn Helgaas wrote:
> >> On Fri, May 02, 2014 at 10:42:18AM +0200, Arnd Bergmann wrote:
> >
> >> > I don't know about NCR_Q720, but all others are only used on machines
> >> > where physical addresses and bus addresses are in the same space.
> >>
> >> In general, the driver doesn't know whether physical and bus addresses
> >> are in the same space.  At least, I *hope* it doesn't have to know,
> >> because it can't be very generic if it does.
> >
> > The API was designed for the case where the memory resides on a PCI
> > device (the Q720 case), the card config gives us a bus address, but if
> > the system has an IOMMU, we'd have to do a dma_map of the entire region
> > to set up the IOMMU before we can touch it.  The address it gets back
> > from the dma_map (the dma_addr_t handle for the IOMMU mapping) is what
> > we pass into dma_declare_coherent_memory().
> 
> The IOMMU (if any) is only involved for DMA to system memory, and
> there is no system memory in this picture.  The device does DMA to its
> own memory; no dma_map is required for this.  We use
> dma_declare_coherent_memory() to set things up so the CPU can also do
> programmed I/O to the memory.

Right, but for the CPU to access memory on a device, the access has to
go through the IOMMU: the IOMMU has to be programmed to map the memory
on the bus to a physical address.
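
For concreteness, on the machines Arnd mentions, where bus and physical
addresses coincide, the usual driver pattern looks roughly like the
sketch below (the BAR index, flags and sizes are illustrative, not taken
from any particular driver):

	void *buf;
	dma_addr_t dma_handle;
	/* BAR 1 assumed (for illustration) to hold the on-card RAM */
	phys_addr_t base = pci_resource_start(pdev, 1);
	size_t size = pci_resource_len(pdev, 1);

	/* Hand the card's memory window to the DMA API; with no IOMMU in
	 * the way, the BAR start doubles as the CPU-visible address. */
	if (!dma_declare_coherent_memory(&pdev->dev, base, base, size,
					 DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE))
		return -ENOMEM;		/* 0 means the declaration failed */

	/* Allocations now come out of the card's memory, already remapped
	 * so the CPU can do programmed I/O through the returned pointer. */
	buf = dma_alloc_coherent(&pdev->dev, PAGE_SIZE, &dma_handle,
				 GFP_KERNEL);

The first argument there is the one whose type is in question.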

> > The reason it does an
> > ioremap is because this IOMMU mapped address is now physical to the CPU
> > and we want to make the region available to virtual space.  Essentially
> > the memory the allocator hands out behaves as proper virtual memory but
> > it's backed by physical memory on the card behind the PCI bridge.
> 
> Yep, the programmed I/O depends on the ioremap().  But I don't think
> it depends on any IOMMU mapping.

At least on Parisc with U2/Uturn, unless there are IOMMU entries, you
won't be able to address the memory on the device because it's behind
the IOMMU.  For regions like this, the necessary IOMMU entries are set
up at init time because without them you don't get memory mapped access
to register space either.

So like I said, I can go either way.  The IOMMU entries arrive properly
set up, so perhaps we should use phys_addr_t, because that's what's on
the other side of the IOMMU.  On the other hand, it is a handle for
something on the device, so perhaps it should be dma_addr_t.
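
For reference, the change under discussion only retypes that first,
CPU-visible argument; the sketch below is approximate, parameter names
included:

	/* today: the CPU-visible address is typed as a bus/DMA handle */
	int dma_declare_coherent_memory(struct device *dev,
					dma_addr_t bus_addr,
					dma_addr_t device_addr,
					size_t size, int flags);

	/* with the patch: the same argument, retyped as the CPU physical
	 * address that ends up being handed to ioremap() */
	int dma_declare_coherent_memory(struct device *dev,
					phys_addr_t phys_addr,
					dma_addr_t device_addr,
					size_t size, int flags);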

James

> > I'm still not that fussed about the difference between phys_addr_t and
> > dma_addr_t, but if the cookie returned from a dma_map is a dma_addr_t
> > then that's what dma_declare_coherent_memory() should use.  If it's a
> > phys_addr_t, then likewise.
> >
> > James
> >

