Message-ID: <CAErSpo7zjXg8xrV7=q36rNp_g573P93HiXTB72Z1cZfjGeeBmg@mail.gmail.com>
Date: Thu, 1 May 2014 13:05:04 -0600
From: Bjorn Helgaas <bhelgaas@...gle.com>
To: James Bottomley <jbottomley@...allels.com>
Cc: "rdunlap@...radead.org" <rdunlap@...radead.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Arnd Bergmann <arnd@...db.de>
Subject: Re: [PATCH] DMA-API: Change dma_declare_coherent_memory() CPU address
to phys_addr_t
[+cc Arnd]
On Thu, May 1, 2014 at 8:08 AM, James Bottomley
<jbottomley@...allels.com> wrote:
> On Wed, 2014-04-30 at 14:33 -0600, Bjorn Helgaas wrote:
>> dma_declare_coherent_memory() takes two addresses for a region of memory: a
>> "bus_addr" and a "device_addr". I think the intent is that "bus_addr" is
>> the physical address a *CPU* would use to access the region, and
>> "device_addr" is the bus address the *device* would use to address the
>> region.
>>
>> Rename "bus_addr" to "phys_addr" and change its type to phys_addr_t.
>
> Remind me what the difference between phys_addr_t and dma_addr_t is.
>
> I thought phys_addr_t was the maximum address the CPU could reach after
> address translation and dma_addr_t was the maximum physical address any
> bus-attached memory-mapped device could appear at. (Of course, mostly
> they're the same.)
I assumed phys_addr_t was for physical addresses generated by a CPU
and dma_addr_t was for addresses generated by a device, e.g.,
addresses you would see with a PCI bus analyzer or in a PCI BAR. But
ARCH_DMA_ADDR_T_64BIT does seem more connected to processor-specific
things, e.g., ARM_LPAE, than to device bus properties like "support
64-bit PCI, not just 32-bit PCI."
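For reference, the typedefs themselves (include/linux/types.h, trimmed)
look roughly like this:

    #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
    typedef u64 dma_addr_t;
    #else
    typedef u32 dma_addr_t;
    #endif

    #ifdef CONFIG_PHYS_ADDR_T_64BIT
    typedef u64 phys_addr_t;
    #else
    typedef u32 phys_addr_t;
    #endif

so the two widths are selected by independent config symbols.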
DMA-API-HOWTO.txt contains this:

    ... the definition of the dma_addr_t (which can hold any valid DMA
    address for the platform) type which should be used everywhere you
    hold a DMA (bus) address returned from the DMA mapping functions.
and clearly you use a dma_addr_t from dma_alloc_coherent(), etc., to
program the device. It seems silly to have phys_addr_t and dma_addr_t
for two (possibly slightly different) flavors of CPU physical
addresses, and then sort of overload dma_addr_t by also using it to
hold the addresses devices use for DMA.
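To illustrate what I mean, a sketch (the register names and "base",
an ioremap()ed BAR, are made up):

    dma_addr_t dma_handle;
    void *cpu_addr;

    /* CPU virtual address for the driver, dma_addr_t for the device */
    cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
    if (!cpu_addr)
            return -ENOMEM;

    /* the device is only ever told the dma_addr_t */
    writel(lower_32_bits(dma_handle), base + FOO_RING_ADDR_LO);
    writel(upper_32_bits(dma_handle), base + FOO_RING_ADDR_HI);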
I think the "CPU generates phys_addr_t" and "device generates
dma_addr_t" distinction seems like a useful idea. I'm not a CPU
person, so I don't understand the usefulness of dma_addr_t as a
separate type to hold CPU addresses at which memory-mapped devices can
appear. If that's all it is, why not just use phys_addr_t everywhere
and get rid of dma_addr_t altogether?
> The intent of dma_declare_coherent_memory() is to take a range of memory
> provided by some device on the bus and allow the CPU to allocate regions
> from it for use as things like descriptors. The CPU treats it as real
> memory, but, in fact, it is a mapped region of an attached device.
>
> If my definition is correct, then bus_addr should be dma_addr_t because
> it has to be mapped from a device and dma_addr_t is the correct type for
> device addresses. If I've got the definition wrong, then we should
> document it somewhere, because it's probably confusing other people as
> well.
dma_declare_coherent_memory() calls ioremap() on bus_addr, which seems
a clear indication that bus_addr is really a physical address usable
by the CPU. But I do see your point that bus_addr is a CPU address
for a memory-mapped device, and would thus fit into your idea of a
dma_addr_t.
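For reference, the current prototype, plus a rough call sequence for a
made-up device (mem_phys/mem_dev_addr/mem_size are illustrative only):

    int dma_declare_coherent_memory(struct device *dev, dma_addr_t bus_addr,
                                    dma_addr_t device_addr, size_t size,
                                    int flags);

    /* bus_addr: the address the CPU uses to reach the device's on-board
     * memory (what this patch changes to "phys_addr_t phys_addr");
     * device_addr: the address the device itself uses for that memory */
    dma_declare_coherent_memory(dev, mem_phys, mem_dev_addr, mem_size,
                                DMA_MEMORY_MAP);

    /* later allocations come out of the ioremap()ed on-device region */
    desc = dma_alloc_coherent(dev, desc_bytes, &desc_dma, GFP_KERNEL);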
I think this is the only part of the DMA API that deals with CPU
physical addresses. I suppose we could side-step this question by
having the caller do the ioremap; then dma_declare_coherent_memory()
could just take a CPU virtual address (a void *) and a device address
(a dma_addr_t).
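Something like this, purely hypothetical, just to show the shape
(__iomem annotations omitted):

    void *vaddr = ioremap(phys_addr, size);  /* caller does the mapping */

    /* hypothetical variant: CPU virtual address in, no CPU physical
     * address anywhere in the DMA API */
    dma_declare_coherent_memory(dev, vaddr, device_addr, size, flags);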
But I'd still have the question of whether we're using dma_addr_t
correctly in the PCI core. We use it to hold BAR values and the like,
and I've been making assumptions like "if dma_addr_t is only 32 bits,
we can't handle BARs above 4GB."
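Roughly this kind of check (a sketch, not the actual PCI core code):

    resource_size_t start = pci_resource_start(pdev, bar);
    dma_addr_t limit = (dma_addr_t)~0;  /* highest value dma_addr_t holds */

    /* with a 32-bit dma_addr_t, a BAR placed above 4GB can't be
     * represented */
    if (start > limit)
            dev_warn(&pdev->dev, "BAR %d above dma_addr_t limit\n", bar);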
Bjorn