Message-ID: <8520D5D51A55D047800579B094147198258B8B85@XAP-PVEXMBX01.xlnx.xilinx.com>
Date:	Wed, 13 Jul 2016 15:16:21 +0000
From:	Bharat Kumar Gogada <bharat.kumar.gogada@...inx.com>
To:	Arnd Bergmann <arnd@...db.de>
CC:	"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Bjorn Helgaas <bhelgaas@...gle.com>,
	"Liviu.Dudau@....com" <Liviu.Dudau@....com>,
	nofooter <nofooter@...inx.com>,
	"thomas.petazzoni@...e-electrons.com" 
	<thomas.petazzoni@...e-electrons.com>
Subject: RE: Purpose of pci_remap_iospace

> Subject: Re: Purpose of pci_remap_iospace
>
> On Wednesday, July 13, 2016 12:30:44 PM CEST Bharat Kumar Gogada wrote:
> > > On Wednesday, July 13, 2016 8:11:56 AM CEST Bharat Kumar Gogada wrote:
> > > > > Subject: Re: Purpose of pci_remap_iospace
> > >
> > > I notice you have 1MB of I/O space here
> > >
> > > > Kernel Boot log:
> > > > [    2.345294] nwl-pcie fd0e0000.pcie: Link is UP
> > > > [    2.345339] PCI host bridge /amba/pcie@...e0000 ranges:
> > > > [    2.345356]   No bus range found for /amba/pcie@...e0000, using [bus 00-ff]
> > > > [    2.345382]    IO 0xe0000000..0xe00fffff -> 0x00000000
> > > > [    2.345401]   MEM 0xe0100000..0xeeffffff -> 0xe0100000
> > > > [    2.345498] nwl-pcie fd0e0000.pcie: PCI host bridge to bus 0000:00
> > > > [    2.345517] pci_bus 0000:00: root bus resource [bus 00-ff]
> > > > [    2.345533] pci_bus 0000:00: root bus resource [io  0x0000-0xfffff]
> > >
> > > and all of it gets mapped by the PCI core. Usually you only have 64K
> > > of I/O space per host bridge, and the PCI core should perhaps not
> > > try to map all of it, though I don't think this is actually your problem here.
> > >
> > > > [    2.345550] pci_bus 0000:00: root bus resource [mem 0xe0100000-0xeeffffff]
> > > > [    2.345770] pci 0000:00:00.0: cannot attach to SMMU, is it on the same bus?
> > > > [    2.345786] iommu: Adding device 0000:00:00.0 to group 1
> > > > [    2.346142] pci 0000:01:00.0: cannot attach to SMMU, is it on the same bus?
> > > > [    2.346158] iommu: Adding device 0000:01:00.0 to group 1
> > > > [    2.346213] pci 0000:00:00.0: BAR 8: assigned [mem 0xe0100000-0xe02fffff]
> > > > [    2.346234] pci 0000:01:00.0: BAR 0: assigned [mem 0xe0100000-0xe01fffff 64bit]
> > > > [    2.346268] pci 0000:01:00.0: BAR 2: assigned [mem 0xe0200000-0xe02fffff 64bit]
> > > > [    2.346300] pci 0000:01:00.0: BAR 4: no space for [io  size 0x0040]
> > > > [    2.346316] pci 0000:01:00.0: BAR 4: failed to assign [io  size 0x0040]
> > > > [    2.346333] pci 0000:00:00.0: PCI bridge to [bus 01-0c]
> > > > [    2.346350] pci 0000:00:00.0:   bridge window [mem 0xe0100000-0xe02fffff]
> > > >
> > > > IO assignment fails.
> > >
> > > I would guess that the I/O space is not registered correctly. Is
> > > this drivers/pci/host/pcie-xilinx.c ? We have had problems with this
> > > in the past, since almost nobody uses I/O space and it requires
> > > several steps to all be done correctly.
> > >
> > Thanks Arnd.
> >
> > We are testing using drivers/pci/host/pcie-xilinx-nwl.c.
>
> According to Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt,
> this hardware does not support I/O space.

We have received a newer version of the IP with I/O support, so we are trying to test this feature.
>
> Is this on ARM or microblaze?

It is ARM 64-bit.
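
For context on the subject line: arm64 defines PCI_IOBASE, so port I/O is memory mapped, and my understanding is that pci_remap_iospace() does little more than map the host bridge's physical I/O aperture into that fixed virtual window. A rough sketch of the mechanism (a paraphrase, not the exact kernel source; the _sketch suffix is mine):

#include <linux/io.h>
#include <linux/mm.h>
#include <linux/pci.h>

/*
 * Roughly what pci_remap_iospace() does on architectures that define
 * PCI_IOBASE (such as arm64): map the bridge's physical I/O window at
 * PCI_IOBASE + res->start so that inb()/outb() accesses reach it.
 */
static int pci_remap_iospace_sketch(const struct resource *res,
				    phys_addr_t phys_addr)
{
	unsigned long vaddr = (unsigned long)PCI_IOBASE + res->start;

	if (!(res->flags & IORESOURCE_IO))
		return -EINVAL;

	if (res->end > IO_SPACE_LIMIT)
		return -EINVAL;

	return ioremap_page_range(vaddr, vaddr + resource_size(res),
				  phys_addr, pgprot_device(PAGE_KERNEL));
}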

> This has neither the PCI memory nor the I/O resource, it looks like you never
> call pci_add_resource_offset() to start with, or maybe it fails for some
> reason.

I see that the above API is used in ARM drivers; do we need to call it on ARM64 as well?
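
To make the question concrete: my understanding is that arm64 host drivers of this generation typically let of_pci_get_host_bridge_resources() parse "ranges" (it calls pci_add_resource_offset() internally for each window) and then map the I/O window themselves with pci_remap_iospace(). A minimal sketch of that pattern, with my_pcie_parse_ranges() being a made-up name rather than anything from our driver:

#include <linux/of_pci.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

/* Hypothetical helper, for illustration only. */
static int my_pcie_parse_ranges(struct platform_device *pdev,
				struct list_head *resources)
{
	struct device_node *node = pdev->dev.of_node;
	struct resource_entry *window;
	resource_size_t iobase = 0;
	int err;

	/*
	 * Parse "ranges": each window is added to the resource list via
	 * pci_add_resource_offset(), and the CPU address of the I/O
	 * aperture is returned in iobase.
	 */
	err = of_pci_get_host_bridge_resources(node, 0, 0xff,
					       resources, &iobase);
	if (err)
		return err;

	/* The I/O window still has to be mapped into PCI_IOBASE. */
	resource_list_for_each_entry(window, resources) {
		if (resource_type(window->res) == IORESOURCE_IO) {
			err = pci_remap_iospace(window->res, iobase);
			if (err)
				return err;
		}
	}

	return 0;
}

Is that roughly the sequence that needs to be present in pcie-xilinx-nwl.c, or is something more required on ARM64?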

Regards,
Bharat



