Message-ID: <CAMSpPPdnQdWeyTsnESRFx52gtxQLxwfPQwQgDFSN=katfW7suA@mail.gmail.com>
Date: Tue, 28 Mar 2017 10:57:39 +0530
From: Oza Oza <oza.oza@...adcom.com>
To: Rob Herring <robh@...nel.org>
Cc: Joerg Roedel <joro@...tes.org>,
Robin Murphy <robin.murphy@....com>,
Linux IOMMU <iommu@...ts.linux-foundation.org>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
"bcm-kernel-feedback-list@...adcom.com"
<bcm-kernel-feedback-list@...adcom.com>
Subject: Re: [RFC PATCH 1/3] of/pci: dma-ranges to account highest possible
host bridge dma_mask
On Mon, Mar 27, 2017 at 8:16 PM, Rob Herring <robh@...nel.org> wrote:
> On Sat, Mar 25, 2017 at 12:31 AM, Oza Pawandeep <oza.oza@...adcom.com> wrote:
>> it is possible that a PCI device supports 64-bit DMA addressing,
>> and thus its driver sets the device's dma_mask to DMA_BIT_MASK(64);
>> however, the PCI host bridge may have limitations on the inbound
>> transaction addressing. As an example, consider an NVMe SSD device
>> connected to the iproc-PCIe controller.
>>
>> Currently, the IOMMU DMA ops only considers PCI device dma_mask
>> when allocating an IOVA. This is particularly problematic on
>> ARM/ARM64 SOCs where the IOMMU (i.e. SMMU) translates IOVA to
>> PA for in-bound transactions only after PCI Host has forwarded
>> these transactions on SOC IO bus. This means on such ARM/ARM64
>> SOCs the IOVA of in-bound transactions has to honor the addressing
>> restrictions of the PCI Host.
>>
>> The current PCIe framework and OF framework integration assumes
>> dma-ranges in a way where memory-mapped devices define their dma-ranges:
>> dma-ranges: (child-bus-address, parent-bus-address, length).
>>
>> But iproc-based SoCs, and even R-Car based SoCs, have PCI-world dma-ranges:
>> dma-ranges = <0x43000000 0x00 0x00 0x00 0x00 0x80 0x00>;
>
> If you implement a common function, then I expect to see other users
> converted to use it. There's also PCI hosts in arch/powerpc that parse
> dma-ranges.
The common function should be similar to what
of_pci_get_host_bridge_resources does today: that one parses the ranges
property, while the new function would parse dma-ranges and look like this:

of_pci_get_dma_ranges(struct device_node *dev, struct list_head *resources)

where resources would return the parsed dma-ranges.
But right now, if you look at the patch, of_dma_configure calls the new
function, which actually returns the largest possible size.
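Just to illustrate how of_dma_configure could consume that size (a
simplified, untested sketch assuming the inbound window starts at zero;
not the exact upstream code):

	u64 size = 0;
	LIST_HEAD(resources);

	/* np is the host bridge node, dev the device being configured */
	if (!of_pci_get_dma_ranges(np, &resources, &size) && size)
		/* clamp the default mask to what the host bridge can address */
		dev->coherent_dma_mask = min(dev->coherent_dma_mask,
					     DMA_BIT_MASK(ilog2(size)));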
So this new function has to be generic enough that other PCI hosts can
use it; iproc (Broadcom SoC) and R-Car based SoCs can certainly use it.
Having powerpc use it is a separate exercise, since I do not have access
to other PCI hosts such as powerpc, but we can work that out with them
on this forum if required.
So overall, of_pci_get_dma_ranges has to serve the following two purposes:
1) return the largest possible size to of_dma_configure, so that it can
generate the largest possible dma_mask.
2) return the parsed resources (dma-ranges) to its callers.
To address both needs:

of_pci_get_dma_ranges(struct device_node *dev, struct list_head *resources, u64 *size)

dev -> device node
resources -> parsed dma-ranges, returned in the allocated list
size -> largest possible size, used to generate the dma_mask in
of_dma_configure
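To make this concrete, here is a rough sketch of what I have in mind for
the helper (untested; of_pci_dma_range_parser_init() is assumed to be a
new dma-ranges flavour of of_pci_range_parser_init() that this series
would have to add, it does not exist upstream today):

/* needs <linux/of_address.h>, <linux/of_pci.h>, <linux/pci.h>, <linux/slab.h> */
int of_pci_get_dma_ranges(struct device_node *np,
			  struct list_head *resources, u64 *size)
{
	struct of_pci_range_parser parser;
	struct of_pci_range range;
	struct resource *res;
	u64 max_size = 0;
	int err;

	/* walk every dma-ranges entry of the host bridge node */
	if (of_pci_dma_range_parser_init(&parser, np))
		return -EINVAL;

	for_each_of_pci_range(&parser, &range) {
		res = kzalloc(sizeof(*res), GFP_KERNEL);
		if (!res)
			return -ENOMEM;

		err = of_pci_range_to_resource(&range, np, res);
		if (err) {
			kfree(res);
			continue;
		}

		/* purpose 1: track the largest inbound window */
		if (range.size > max_size)
			max_size = range.size;

		/* purpose 2: hand the parsed window back to the caller */
		pci_add_resource_offset(resources, res,
					res->start - range.pci_addr);
	}

	*size = max_size;
	return 0;
}

That way of_dma_configure gets the size it needs, and host drivers that
want the actual windows can walk the resource list.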
Let me know how this sounds.
Regards,
Oza.