Date:	Fri, 2 May 2014 12:18:07 +0100
From:	Liviu Dudau <Liviu.Dudau@....com>
To:	Bjorn Helgaas <bhelgaas@...gle.com>
Cc:	James Bottomley <jbottomley@...allels.com>,
	"rdunlap@...radead.org" <rdunlap@...radead.org>,
	"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
	"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
	"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Arnd Bergmann <arnd@...db.de>
Subject: Re: [PATCH] DMA-API: Change dma_declare_coherent_memory() CPU
 address to phys_addr_t

On Thu, May 01, 2014 at 08:05:04PM +0100, Bjorn Helgaas wrote:
> [+cc Arnd]
> 
> On Thu, May 1, 2014 at 8:08 AM, James Bottomley
> <jbottomley@...allels.com> wrote:
> > On Wed, 2014-04-30 at 14:33 -0600, Bjorn Helgaas wrote:
> >> dma_declare_coherent_memory() takes two addresses for a region of memory: a
> >> "bus_addr" and a "device_addr".  I think the intent is that "bus_addr" is
> >> the physical address a *CPU* would use to access the region, and
> >> "device_addr" is the bus address the *device* would use to address the
> >> region.
> >>
> >> Rename "bus_addr" to "phys_addr" and change its type to phys_addr_t.
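
For reference, the change under discussion is roughly the following (the
prototype as it appears in <linux/dma-mapping.h> of this era; parameter
names per the patch description):

	/* before: the CPU-side address is typed as a bus address */
	int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
					dma_addr_t device_addr, size_t size,
					int flags);

	/* after: the CPU-side address is explicitly a physical address */
	int dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
					dma_addr_t device_addr, size_t size,
					int flags);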
> >
> > Remind me what the difference between phys_addr_t and dma_addr_t is.
> >
> > I thought phys_addr_t was the maximum address the CPU could reach after
> > address translation and dma_addr_t was the maximum physical address at
> > which any bus-attached, memory-mapped device could appear (of course,
> > mostly they're the same).
> 
> I assumed phys_addr_t was for physical addresses generated by a CPU
> and dma_addr_t was for addresses generated by a device, e.g.,
> addresses you would see with a PCI bus analyzer or in a PCI BAR.  But
> ARCH_DMA_ADDR_T_64BIT does seem more connected to processor-specific
> things, e.g., ARM_LPAE, than to device bus properties like "support
> 64-bit PCI, not just 32-bit PCI."
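
The typedefs bear that out: both widths are selected by arch-level Kconfig
symbols rather than by bus properties (from include/linux/types.h of this
era):

	#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
	typedef u64 dma_addr_t;
	#else
	typedef u32 dma_addr_t;
	#endif

	#ifdef CONFIG_PHYS_ADDR_T_64BIT
	typedef u64 phys_addr_t;
	#else
	typedef u32 phys_addr_t;
	#endif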
> 
> DMA-API-HOWTO.txt contains this:
> 
>   ... the definition of the dma_addr_t (which can hold any valid DMA
>   address for the platform) type which should be used everywhere you
>   hold a DMA (bus) address returned from the DMA mapping functions.
> 
> and clearly you use a dma_addr_t from dma_alloc_coherent(), etc., to
> program the device.  It seems silly to have phys_addr_t and dma_addr_t
> for two (possibly slightly different) flavors of CPU physical
> addresses, and then sort of overload dma_addr_t by also using it to
> hold the addresses devices use for DMA.

dma_addr_t is a phys_addr_t masked according to the capabilities of the
*DMA engine* (not the device). Quite a lot of PCI devices (all?) now
contain a DMA engine, so the distinction is somewhat blurred nowadays.
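
A minimal sketch of how the two types meet in a driver (RING_BYTES,
DEV_RING_LO/DEV_RING_HI and "regs" are hypothetical names, for
illustration only):

	dma_addr_t ring_dma;
	void *ring;

	/* The CPU gets a kernel virtual address; the device gets a
	 * dma_addr_t that already respects the DMA engine's limits. */
	ring = dma_alloc_coherent(dev, RING_BYTES, &ring_dma, GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	memset(ring, 0, RING_BYTES);	/* CPU access, via the void * */
	writel(lower_32_bits(ring_dma), regs + DEV_RING_LO);
	writel(upper_32_bits(ring_dma), regs + DEV_RING_HI);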

> 
> I think the "CPU generates phys_addr_t" and "device generates
> dma_addr_t" distinction is a useful one.

s/device/DMA engine inside device/

> I'm not a CPU
> person, so I don't understand the usefulness of dma_addr_t as a
> separate type to hold CPU addresses at which memory-mapped devices can
> appear.  If that's all it is, why not just use phys_addr_t everywhere
> and get rid of dma_addr_t altogether?

Because dma_addr_t imposes further restrictions based on what the DMA engine
can access. You could have a DMA engine that can only access 32-bit addresses
in a system whose CPU addresses are 40+ bits wide.
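
That restriction is exactly what the DMA mask machinery expresses; a driver
for such an engine would do something like:

	/* Sketch: declare that the DMA engine can only generate 32-bit
	 * addresses, whatever the width of the CPU's physical addresses. */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
		return -EIO;	/* platform cannot satisfy the mask */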

> 
> > The intent of dma_declare_coherent_memory() is to take a range of memory
> > provided by some device on the bus and allow the CPU to allocate regions
> > from it for use as things like descriptors.  The CPU treats it as real
> > memory, but, in fact, it is a mapped region of an attached device.

It is not the CPU that is the user of bus_addr but the DMA engine, and
that can be on the bus as well, so it needs to generate bus addresses.
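
A sketch of the usual call, with both views of the same window spelled out
(CPU_WINDOW and DEV_WINDOW are hypothetical constants, for illustration
only):

	/* Expose 1 MiB of device-local memory for coherent allocations.
	 * CPU_WINDOW is the address the CPU uses to reach the window;
	 * DEV_WINDOW is what the device's DMA engine uses. */
	if (!dma_declare_coherent_memory(dev, CPU_WINDOW, DEV_WINDOW, SZ_1M,
					 DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE))
		return -ENXIO;

	/* dma_alloc_coherent() on this device now carves its buffers out
	 * of that window instead of system RAM. */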

> >
> > If my definition is correct, then bus_addr should be dma_addr_t because
> > it has to be mapped from a device and dma_addr_t is the correct type for
> > device addresses.  If I've got the definition wrong, then we should
> > document it somewhere, because it's probably confusing other people as
> > well.
> 
> dma_declare_coherent_memory() calls ioremap() on bus_addr, which seems
> a clear indication that bus_addr is really a physical address usable
> by the CPU.  But I do see your point that bus_addr is a CPU address
> for a memory-mapped device, and would thus fit into your idea of a
> dma_addr_t.
> 
> I think this is the only part of the DMA API that deals with CPU
> physical addresses.  I suppose we could side-step this question by
> having the caller do the ioremap; then dma_declare_coherent_memory()
> could just take a CPU virtual address (a void *) and a device address
> (a dma_addr_t).
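
For concreteness, that alternative would presumably look something like
this (a sketch of the proposed shape, not existing code):

	/* Hypothetical: the caller does the ioremap() itself, so the
	 * function takes a CPU virtual address plus the device-side
	 * DMA address. */
	int dma_declare_coherent_memory(struct device *dev, void *vaddr,
					dma_addr_t device_addr, size_t size,
					int flags);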

And how would that change the meaning of the function?

Best regards,
Liviu

> 
> But I'd still have the question of whether we're using dma_addr_t
> correctly in the PCI core.  We use it to hold BAR values and the like,
> and I've been making assumptions like "if dma_addr_t is only 32 bits,
> we can't handle BARs above 4GB."
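
For example, a check along these lines (a sketch of the assumption, not a
quote from the PCI core):

	/* If dma_addr_t is only 32 bits wide, a BAR value above 4GB
	 * cannot even be represented and would have to be rejected. */
	static int check_bar_fits(u64 bar_addr)
	{
		if (bar_addr > (dma_addr_t)~0ULL)
			return -EINVAL;	/* not representable in dma_addr_t */
		return 0;
	}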
> 
> Bjorn

-- 
====================
| I would like to |
| fix the world,  |
| but they're not |
| giving me the   |
 \ source code!  /
  ---------------
    ¯\_(ツ)_/¯

