Message-ID: <dc581c5b-11de-f4b3-e928-208b9293e391@arm.com>
Date: Fri, 1 May 2020 16:54:17 +0100
From: Robin Murphy <robin.murphy@....com>
To: Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Kishon Vijay Abraham I <kishon@...com>
Cc: Tom Joseph <tjoseph@...ence.com>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Rob Herring <robh+dt@...nel.org>,
Andrew Murray <amurray@...goodpenguin.co.uk>,
linux-pci@...r.kernel.org, devicetree@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/4] PCI: cadence: Use "dma-ranges" instead of
"cdns,no-bar-match-nbits" property
On 2020-05-01 3:46 pm, Lorenzo Pieralisi wrote:
> [+Robin - to check on dma-ranges interpretation]
>
> I would need RobH and Robin to review this.
>
> Also, an ACK from Tom is required - for the whole series.
>
> On Fri, Apr 17, 2020 at 05:13:20PM +0530, Kishon Vijay Abraham I wrote:
>> Cadence PCIe core driver (host mode) uses "cdns,no-bar-match-nbits"
>> property to configure the number of bits passed through from PCIe
>> address to internal address in Inbound Address Translation register.
>>
>> However standard PCI dt-binding already defines "dma-ranges" to
>> describe the address range accessible by PCIe controller. Parse
>> "dma-ranges" property to configure the number of bits passed
>> through from PCIe address to internal address in Inbound Address
>> Translation register.
>>
>> Signed-off-by: Kishon Vijay Abraham I <kishon@...com>
>> ---
>> drivers/pci/controller/cadence/pcie-cadence-host.c | 13 +++++++++++--
>> 1 file changed, 11 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c
>> index 9b1c3966414b..60f912a657b9 100644
>> --- a/drivers/pci/controller/cadence/pcie-cadence-host.c
>> +++ b/drivers/pci/controller/cadence/pcie-cadence-host.c
>> @@ -206,8 +206,10 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>> struct device *dev = rc->pcie.dev;
>> struct platform_device *pdev = to_platform_device(dev);
>> struct device_node *np = dev->of_node;
>> + struct of_pci_range_parser parser;
>> struct pci_host_bridge *bridge;
>> struct list_head resources;
>> + struct of_pci_range range;
>> struct cdns_pcie *pcie;
>> struct resource *res;
>> int ret;
>> @@ -222,8 +224,15 @@ int cdns_pcie_host_setup(struct cdns_pcie_rc *rc)
>> rc->max_regions = 32;
>> of_property_read_u32(np, "cdns,max-outbound-regions", &rc->max_regions);
>>
>> - rc->no_bar_nbits = 32;
>> - of_property_read_u32(np, "cdns,no-bar-match-nbits", &rc->no_bar_nbits);
>> + if (!of_pci_dma_range_parser_init(&parser, np))
>> + if (of_pci_range_parser_one(&parser, &range))
>> + rc->no_bar_nbits = ilog2(range.size);
You probably want "range.pci_addr + range.size" here, just in case the
bottom of the window is ever non-zero. Is there definitely only ever a
single inbound window to consider?
I believe that pci_parse_request_of_pci_ranges() could do the actual
parsing for you, but I suppose plumbing that in plus processing the
resulting dma_ranges resource probably ends up a bit messier than the
concise open-coding here.
Robin.
>> +
>> + if (!rc->no_bar_nbits) {
>> + rc->no_bar_nbits = 32;
>> + of_property_read_u32(np, "cdns,no-bar-match-nbits",
>> + &rc->no_bar_nbits);
>> + }
>>
>> rc->vendor_id = 0xffff;
>> of_property_read_u16(np, "vendor-id", &rc->vendor_id);
>> --
>> 2.17.1
>>