Message-Id: <201402212253.01101.arnd@arndb.de>
Date:	Fri, 21 Feb 2014 22:53:00 +0100
From:	Arnd Bergmann <arnd@...db.de>
To:	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc:	Alistair Popple <alistair@...ple.id.au>,
	devicetree@...r.kernel.org, linux-kernel@...r.kernel.org,
	linuxppc-dev@...ts.ozlabs.org
Subject: Re: [PATCH 7/7] powerpc: Added PCI MSI support using the HSTA module

On Friday 21 February 2014, Benjamin Herrenschmidt wrote:
> On Fri, 2014-02-21 at 15:33 +0100, Arnd Bergmann wrote:
> 
> > > @@ -242,8 +264,10 @@
> > >  			ranges = <0x02000000 0x00000000 0x80000000 0x00000110 0x80000000 0x0 0x80000000
> > >  			          0x01000000 0x0        0x0        0x00000140 0x0        0x0 0x00010000>;
> > >  
> > > -			/* Inbound starting at 0 to memsize filled in by zImage */
> > > -			dma-ranges = <0x42000000 0x0 0x0 0x0 0x0 0x0 0x0>;
> > > +			/* Inbound starting at 0x0 to 0x40000000000. In order to use MSI
> > > +			 * PCI devices must be able to write to the HSTA module.
> > > +			 */
> > > +			dma-ranges = <0x42000000 0x0 0x0 0x0 0x0 0x400 0x0>;
> 
> Should we (provided it's possible in HW) create two ranges instead? One
> covering RAM and one covering MSIs? That would avoid stray DMAs whacking
> random HW registers in the chip...
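Something like the following, perhaps (purely illustrative: the RAM size and the HSTA window address/size below are placeholder values, not the real chip layout):

```dts
/* Two inbound windows instead of one catch-all range.
 * Cell layout as in the single-entry version above:
 * <flags pci-addr-hi pci-addr-lo parent-hi parent-lo size-hi size-lo>.
 * All addresses and sizes here are hypothetical placeholders. */
dma-ranges = <0x42000000 0x0 0x0 0x0 0x0 0x1 0x0        /* 4 GiB covering RAM */
              0x42000000 0x4 0x0 0x4 0x0 0x0 0x10000>;  /* 64 KiB HSTA MSI window */
```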
> 
> > >  			/* This drives busses 0 to 0xf */
> > >  			bus-range = <0x0 0xf>;
> > 
> > Ah, at first I only saw the line you are removing and was about
> > to suggest exactly what you are doing. Great!
> > 
> > > diff --git a/arch/powerpc/sysdev/ppc4xx_pci.c b/arch/powerpc/sysdev/ppc4xx_pci.c
> > > index 54ec1d5..7cc3acc 100644
> > > --- a/arch/powerpc/sysdev/ppc4xx_pci.c
> > > +++ b/arch/powerpc/sysdev/ppc4xx_pci.c
> > > @@ -176,8 +176,12 @@ static int __init ppc4xx_parse_dma_ranges(struct pci_controller *hose,
> > >  		return -ENXIO;
> > >  	}
> > >  
> > > -	/* Check that we are fully contained within 32 bits space */
> > > -	if (res->end > 0xffffffff) {
> > > +	/* Check that we are fully contained within 32 bits space if we are not
> > > +	 * running on a 460sx or 476fpe which have 64 bit bus addresses.
> > > +	 */
> > > +	if (res->end > 0xffffffff &&
> > > +	    !(of_device_is_compatible(hose->dn, "ibm,plb-pciex-460sx")
> > > +	      || of_device_is_compatible(hose->dn, "ibm,plb-pciex-476fpe"))) {
> > >  		printk(KERN_ERR "%s: dma-ranges outside of 32 bits space\n",
> > >  		       hose->dn->full_name);
> > >  		return -ENXIO;
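Viewed on its own, the rule this hunk encodes can be modeled as a small standalone predicate (a sketch, not the kernel code itself: the helper name is made up, and strcmp() stands in for of_device_is_compatible()):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the check in ppc4xx_parse_dma_ranges(): an inbound window
 * ending above the 32-bit boundary is only acceptable on controllers
 * whose bus addresses are 64-bit (460sx and 476fpe in the patch). */
static bool dma_range_ok(uint64_t res_end, const char *compatible)
{
	bool has_64bit_bus =
		strcmp(compatible, "ibm,plb-pciex-460sx") == 0 ||
		strcmp(compatible, "ibm,plb-pciex-476fpe") == 0;

	return res_end <= 0xffffffffULL || has_64bit_bus;
}
```

With the dma-ranges from the patch (size cells 0x400 0x0, so an end just below 0x40000000000), the window is accepted only on the two 64-bit controllers.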
> > 
> > A more general question for BenH: Apparently this PCI implementation is
> > getting reused on arm64 for APM X-Gene. Do you see any value in trying to
> > share host controller drivers like this one across powerpc and arm64?
> 
> I would start duplicating, and see how much common code remains... Then
> eventually merge.

Ok.

> > It's possible we are going to see the same situation with fsl_pci in the
> > future, if arm and powerpc qoriq chips use the same peripherals. My
> > plan for arm64 right now is to make PCI work without any code in arch/,
> > just using new helper functions in drivers/pci and sticking the host
> > drivers into drivers/pci/host as we started doing for arm32, but it
> > can require significant work to make those drivers compatible with
> > the powerpc pci-common.c.
> 
> powerpc pci-common.c is shrinking :-) At least the address remapping is
> all in the core now, we could move more over I suppose...

Ah, good. We're currently trying to work out a generic way to parse
the DT and ioremap the I/O windows. That could probably be shared,
and while I hope what we need on arm64 is compatible with what you
need on powerpc, I could always be missing something. I'll make sure
to add you to the discussions.

Some parts are easier because we assume that we always scan the
entire PCI bus ourselves and don't do PCI_PROBE_DEVTREE.
Other parts are harder because, for the generic case, we actually
want to support loading and unloading host bridge drivers,
as well as supporting any combination of host bridges in the
same system without platform-specific code.

	Arnd