Date:	Thu, 14 Jul 2016 06:03:44 +0000
From:	Bharat Kumar Gogada <bharat.kumar.gogada@...inx.com>
To:	Lorenzo Pieralisi <lorenzo.pieralisi@....com>
CC:	Arnd Bergmann <arnd@...db.de>,
	"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Bjorn Helgaas <bhelgaas@...gle.com>,
	"Liviu.Dudau@....com" <Liviu.Dudau@....com>,
	nofooter <nofooter@...inx.com>,
	"thomas.petazzoni@...e-electrons.com" 
	<thomas.petazzoni@...e-electrons.com>
Subject: RE: Purpose of pci_remap_iospace

> Subject: Re: Purpose of pci_remap_iospace
>
> On Wed, Jul 13, 2016 at 12:30:44PM +0000, Bharat Kumar Gogada wrote:
>
> [...]
>
> >         err = of_pci_get_host_bridge_resources(node, 0, 0xff, &res, &iobase);
> >         if (err) {
> >                 pr_err("Getting bridge resources failed\n");
> >                 return err;
> >         }
> >
> >         /* code for the I/O resource */
> >         resource_list_for_each_entry(window, &res) {
> >                 struct resource *res = window->res;
> >                 u64 restype = resource_type(res);
> >
> >                 switch (restype) {
> >                 case IORESOURCE_IO:
> >                         err = pci_remap_iospace(res, iobase);
> >                         if (err)
> >                                 pr_info("Failed to remap I/O resource\n");
> >                         break;
> >                 default:
> >                         dev_err(pcie->dev, "invalid resource %pR\n", res);
> >                         break;
> >                 }
> >         }
> >
> > Other than the code above, I haven't made any other changes to the driver.
> >
> Here is your PCI bridge mem space window assignment. I do not see an IO
> window assignment, which makes me think that IO cycles and the
> corresponding IO window are not enabled through the bridge. That is the
> reason you can't assign IO space to the endpoint: it has no parent IO
> window enabled, IIUC.
>
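
Restating my understanding of the call quoted above, since it is the
subject of this thread: pci_remap_iospace() does not choose I/O
addresses; it only maps the CPU physical window into the kernel's fixed
PCI I/O virtual area at PCI_IOBASE. A minimal sketch with the values
from our ranges; "io_res" here is illustrative, in the driver it comes
from of_pci_get_host_bridge_resources():

        /*
         * Sketch only: maps PCI_IOBASE + res->start (bus-side I/O
         * offset) onto the CPU physical address passed in.
         */
        struct resource io_res = {
                .name  = "pcie-io",
                .flags = IORESOURCE_IO,
                .start = 0x0000,        /* bus I/O offset */
                .end   = 0xffff,        /* 64K window     */
        };

        err = pci_remap_iospace(&io_res, 0xe0000000 /* CPU phys addr */);
        if (err)
                pr_err("pci_remap_iospace failed: %d\n", err);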

We sorted this out: we enabled the I/O Base/Limit and I/O Base/Limit Upper 16 Bits registers in the bridge for 32-bit decode.
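
For reference, a sketch of what that register programming amounts to
(roughly what the kernel's own bridge setup does; the offsets are the
standard PCI_IO_BASE/PCI_IO_LIMIT and *_UPPER16 ones from pci_regs.h,
and the function name is made up):

        static void sketch_bridge_io_window(struct pci_dev *bridge,
                                            u32 base, u32 limit)
        {
                /* bits 15:12 of base/limit live in bits 7:4 here */
                pci_write_config_byte(bridge, PCI_IO_BASE,
                                      (base >> 8) & 0xf0);
                pci_write_config_byte(bridge, PCI_IO_LIMIT,
                                      (limit >> 8) & 0xf0);

                /* upper 16 bits, valid only with 32-bit I/O decode */
                pci_write_config_word(bridge, PCI_IO_BASE_UPPER16,
                                      base >> 16);
                pci_write_config_word(bridge, PCI_IO_LIMIT_UPPER16,
                                      limit >> 16);
        }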
However, the I/O address assigned to the EP is different from what I provide in the device tree.

Device tree property:
ranges = <0x01000000 0x00000000 0x00000000 0x00000000 0xe0000000 0x00000000 0x00010000    // io
          0x02000000 0x00000000 0xe0100000 0x00000000 0xe0100000 0x00000000 0x0ef00000>;  // non-prefetchable memory
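
Each 7-cell entry is <flags, 64-bit PCI address, 64-bit CPU address,
64-bit size>. A sketch of how the kernel walks these ("node" as in the
code quoted earlier); for the I/O entry this should print pci_addr 0x0,
cpu_addr 0xe0000000, size 0x10000:

        struct of_pci_range_parser parser;
        struct of_pci_range range;

        if (of_pci_range_parser_init(&parser, node))
                return -EINVAL;

        for_each_of_pci_range(&parser, &range)
                pr_info("flags %#x: PCI %#llx -> CPU %#llx, size %#llx\n",
                        range.flags, range.pci_addr, range.cpu_addr,
                        range.size);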

Here is the boot log:
[    2.312504] nwl-pcie fd0e0000.pcie: Link is UP
[    2.312548] PCI host bridge /amba/pcie@fd0e0000 ranges:
[    2.312565]   No bus range found for /amba/pcie@fd0e0000, using [bus 00-ff]
[    2.312591]    IO 0xe0000000..0xe000ffff -> 0x00000000
[    2.312610]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
[    2.312711] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
[    2.312729] pci_bus 0000:00: root bus resource [bus 00-ff]
[    2.312745] pci_bus 0000:00: root bus resource [io  0x0000-0xffff]
[    2.312761] pci_bus 0000:00: root bus resource [mem 0xe0100000-0xeeffffff]
[    2.312993] pci 0000:00:00.0: cannot attach to SMMU, is it on the same bus?
[    2.313009] iommu: Adding device 0000:00:00.0 to group 1
[    2.313363] pci 0000:01:00.0: cannot attach to SMMU, is it on the same bus?
[    2.313379] iommu: Adding device 0000:01:00.0 to group 1
[    2.313434] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe02fffff]
[    2.313452] pci 0000:00:00.0: BAR 7: assigned [io  0x1000-0x1fff]
[    2.313469] pci 0000:00:00.0: BAR 6: assigned [mem 0xe0300000-0xe03007ff pref]
[    2.313495] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff 64bit]
[    2.313529] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff 64bit]
[    2.313561] pci 0000:01:00.0: BAR 4: assigned [io  0x1000-0x103f]
[    2.313581] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
[    2.313597] pci 0000:00:00.0:   bridge window [io  0x1000-0x1fff]
[    2.313614] pci 0000:00:00.0:   bridge window [mem 0xe0100000-0xe02fffff]

If we are mapping our I/O space at 0xe0000000 with a size of 64K, why is the kernel showing 0x1000-0x1fff, which is only 4K?
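
My working assumption (please correct me if this is wrong) is that the
0x1000 is the bus-side offset inside the remapped window rather than a
CPU address, i.e. something like:

        /*
         * Illustrative only -- the three views of the EP's I/O BAR:
         *   bus address:   0x1000                  (what lspci shows)
         *   virtual:       PCI_IOBASE + 0x1000     (inb/outb target)
         *   CPU physical:  0xe0000000 + 0x1000     (pci_remap_iospace)
         */
        u8 val = inb(0x1000);   /* reaches the EP as I/O port 0x1000 */
        outb(0xff, 0x1000);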

lspci of bridge:
00:00.0 PCI bridge: Xilinx Corporation Device a024 (prog-if 00 [Normal decode])
        Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin A routed to IRQ 224
        Bus: primary=00, secondary=01, subordinate=0c, sec-latency=0
        I/O behind bridge: 00001000-00001fff
        Memory behind bridge: e0100000-e02fffff

lspci of EP:
01:00.0 Memory controller: Xilinx Corporation Device d024
        Subsystem: Xilinx Corporation Device 0007
        Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin A routed to IRQ 224
        Region 0: Memory at e0100000 (64-bit, non-prefetchable) [disabled] [size=1M]
        Region 2: Memory at e0200000 (64-bit, non-prefetchable) [disabled] [size=1M]
        Region 4: I/O ports at 1000 [disabled] [size=64]

I have yet to try the other APIs you pointed out (devm_request_resource()).
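
If it helps, the shape I have in mind for that is roughly (assuming the
parsed I/O resource is claimed against ioport_resource; "res" and "dev"
as in the driver):

        err = devm_request_resource(dev, &ioport_resource, res);
        if (err)
                dev_err(dev, "failed to request resource %pR\n", res);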

Thanks & Regards,
Bharat

