Date:	Fri, 2 May 2014 06:11:42 +0000
From:	James Bottomley <jbottomley@...allels.com>
To:	"bhelgaas@...gle.com" <bhelgaas@...gle.com>
CC:	"rdunlap@...radead.org" <rdunlap@...radead.org>,
	"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
	"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
	"arnd@...db.de" <arnd@...db.de>,
	"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] DMA-API: Change dma_declare_coherent_memory() CPU
 address to phys_addr_t

On Thu, 2014-05-01 at 13:05 -0600, Bjorn Helgaas wrote:
> [+cc Arnd]
> 
> On Thu, May 1, 2014 at 8:08 AM, James Bottomley
> <jbottomley@...allels.com> wrote:
> > On Wed, 2014-04-30 at 14:33 -0600, Bjorn Helgaas wrote:
> >> dma_declare_coherent_memory() takes two addresses for a region of memory: a
> >> "bus_addr" and a "device_addr".  I think the intent is that "bus_addr" is
> >> the physical address a *CPU* would use to access the region, and
> >> "device_addr" is the bus address the *device* would use to address the
> >> region.
> >>
> >> Rename "bus_addr" to "phys_addr" and change its type to phys_addr_t.
> >
> > Remind me what the difference between phys_addr_t and dma_addr_t is.
> >
> > I thought phys_addr_t was the maximum address the CPU could reach
> > after address translation, and dma_addr_t was the maximum physical
> > address at which any bus-attached memory-mapped device could appear
> > (of course, mostly they're the same).
> 
> I assumed phys_addr_t was for physical addresses generated by a CPU
> and dma_addr_t was for addresses generated by a device, e.g.,
> addresses you would see with a PCI bus analyzer or in a PCI BAR.  But
> ARCH_DMA_ADDR_T_64BIT does seem more connected to processor-specific
> things, e.g., ARM_LPAE, than to device bus properties like "support
> 64-bit PCI, not just 32-bit PCI."

OK, but even in your definition dma_declare_coherent_memory() should
operate on dma_addr_t, because the input is a region of memory you got
from the device.  Somehow, from the device configuration, you generate
an idea of where this piece of memory sits on the device, and that's
what you feed into dma_declare_coherent_memory().  It maps that region
and makes it available to the CPU allocators.
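
For reference, the two shapes under discussion look roughly like this
(paraphrased rather than quoted verbatim from the tree):

    /* current: both addresses are typed dma_addr_t */
    int dma_declare_coherent_memory(struct device *dev,
                                    dma_addr_t bus_addr,
                                    dma_addr_t device_addr,
                                    size_t size, int flags);

    /* proposed: the CPU-side address becomes a phys_addr_t */
    int dma_declare_coherent_memory(struct device *dev,
                                    phys_addr_t phys_addr,
                                    dma_addr_t device_addr,
                                    size_t size, int flags);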

> 
> DMA-API-HOWTO.txt contains this:
> 
>   ... the definition of the dma_addr_t (which can hold any valid DMA
>   address for the platform) type which should be used everywhere you
>   hold a DMA (bus) address returned from the DMA mapping functions.
> 
> and clearly you use a dma_addr_t from dma_alloc_coherent(), etc., to
> program the device.  It seems silly to have phys_addr_t and dma_addr_t
> for two (possibly slightly different) flavors of CPU physical
> addresses, and then sort of overload dma_addr_t by also using it to
> hold the addresses devices use for DMA.
> 
> I think the "CPU generates phys_addr_t" and "device generates
> dma_addr_t" distinction seems like a useful idea.  I'm not a CPU
> person, so I don't understand the usefulness of dma_addr_t as a
> separate type to hold CPU addresses at which memory-mapped devices can
> appear.  If that's all it is, why not just use phys_addr_t everywhere
> and get rid of dma_addr_t altogether?

Actually, on this one, I think I agree ... we really just need the
idea of physical and virtual.  The original idea of dma_addr_t was
that the type would yell if you tried to use it where a virtual
address could be used, and that users would treat it as an opaque
cookie for a region on the device, but I suppose it's generating more
confusion than it solves.
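
To make the cookie point concrete: both are just conditionally sized
integer typedefs (roughly as in include/linux/types.h of this era), so
nothing but convention stops you from mixing them up:

    #ifdef CONFIG_PHYS_ADDR_T_64BIT
    typedef u64 phys_addr_t;        /* what the CPU can address */
    #else
    typedef u32 phys_addr_t;
    #endif

    #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
    typedef u64 dma_addr_t;         /* what a device can address */
    #else
    typedef u32 dma_addr_t;
    #endif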

> > The intent of dma_declare_coherent_memory() is to take a range of memory
> > provided by some device on the bus and allow the CPU to allocate regions
> > from it for use as things like descriptors.  The CPU treats it as real
> > memory, but, in fact, it is a mapped region of an attached device.
> >
> > If my definition is correct, then bus_addr should be dma_addr_t because
> > it has to be mapped from a device and dma_addr_t is the correct type for
> > device addresses.  If I've got the definition wrong, then we should
> > document it somewhere, because it's probably confusing other people as
> > well.
> 
> dma_declare_coherent_memory() calls ioremap() on bus_addr, which seems
> a clear indication that bus_addr is really a physical address usable
> by the CPU.  But I do see your point that bus_addr is a CPU address
> for a memory-mapped device, and would thus fit into your idea of a
> dma_addr_t.

Well, yes, but it's declaring that the device has a region at bus_addr
(on the device) that we want to make available for the CPU to use.  On
the Q720 it's about a megabyte of SRAM behind a bridge.  The CPU puts
script fragments into it and the 720's script engines execute them.
The reason for the memory being on the device behind a bridge is that
fetching from CPU main memory causes too much bus traffic.  I think all
the other uses are something similar.  The point, though, is that the
address for the memory comes from the device.  It has to be ioremapped
because the system doesn't know about it, and we're going to allocate
from it for buffers mapped into the CPU virtual space.
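
A hypothetical sketch of that pattern from the driver side (names and
sizes are invented; the flags and return convention are the current
API, where success returns DMA_MEMORY_MAP):

    /* Expose ~1MB of on-card SRAM to the CPU-side allocators.
     * sram_bus_addr/sram_dev_addr come from device configuration,
     * not from the system memory map. */
    if (dma_declare_coherent_memory(dev, sram_bus_addr, sram_dev_addr,
                                    SZ_1M, DMA_MEMORY_MAP) !=
        DMA_MEMORY_MAP)
            goto fail;

    /* Coherent allocations for this device now carve from the SRAM:
     * "script" is a CPU virtual address into the ioremapped region,
     * script_dma is what the on-card script engines fetch from. */
    script = dma_alloc_coherent(dev, SCRIPT_SIZE, &script_dma,
                                GFP_KERNEL);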

> I think this is the only part of the DMA API that deals with CPU
> physical addresses.  I suppose we could side-step this question by
> having the caller do the ioremap; then dma_declare_coherent_memory()
> could just take a CPU virtual address (a void *) and a device address
> (a dma_addr_t).

But we have to do the ioremap.  Without that there's no mapping for the
memory region we've just declared, so I think it does make sense to do
it within the dma_declare_coherent_memory() API.
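
Roughly what those internals look like (paraphrased from
drivers/base/dma-coherent.c, error handling elided); the mapping and
the allocator state live together, which is why splitting the ioremap
out to the caller feels wrong to me:

    struct dma_coherent_mem *mem = kzalloc(sizeof(*mem), GFP_KERNEL);

    mem->virt_base   = ioremap(bus_addr, size); /* CPU-side mapping  */
    mem->device_base = device_addr;             /* the device's view */
    mem->size        = size >> PAGE_SHIFT;      /* size in pages     */
    mem->bitmap      = kzalloc(BITS_TO_LONGS(mem->size) * sizeof(long),
                               GFP_KERNEL);     /* page-grain allocator */
    dev->dma_mem     = mem;  /* consulted first by dma_alloc_coherent() */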

> But I'd still have the question of whether we're using dma_addr_t
> correctly in the PCI core.  We use it to hold BAR values and the like,
> and I've been making assumptions like "if dma_addr_t is only 32 bits,
> we can't handle BARs above 4GB."

Well, first we need to answer whether we need separate phys_addr_t and
dma_addr_t ... we know the former represents physical memory addressed
by the CPU and the latter bus physical addresses from devices ... but
they're essentially the same.

James
