Message-Id: <201401301516.27091.arnd@arndb.de>
Date:	Thu, 30 Jan 2014 15:16:26 +0100
From:	Arnd Bergmann <arnd@...db.de>
To:	Tanmay Inamdar <tinamdar@....com>
Cc:	Bjorn Helgaas <bhelgaas@...gle.com>,
	Jason Gunthorpe <jgunthorpe@...idianresearch.com>,
	Grant Likely <grant.likely@...aro.org>,
	Rob Herring <robh+dt@...nel.org>,
	Catalin Marinas <catalin.marinas@....com>,
	Rob Landley <rob@...dley.net>, linux-pci@...r.kernel.org,
	devicetree@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
	linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org,
	patches@....com, jcm@...hat.com
Subject: Re: [RFC PATCH V3 1/4] pci: APM X-Gene PCIe controller driver

On Friday 24 January 2014, Tanmay Inamdar wrote:

> +static void xgene_pcie_fixup_bridge(struct pci_dev *dev)
> +{
> +	int i;
> +
> +	/* Hide the PCI host BARs from the kernel as their content doesn't
> +	 * fit well in the resource management
> +	 */
> +	for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
> +		dev->resource[i].start = dev->resource[i].end = 0;
> +		dev->resource[i].flags = 0;
> +	}
> +	dev_info(&dev->dev, "Hiding X-Gene pci host bridge resources %s\n",
> +		 pci_name(dev));
> +}
> +DECLARE_PCI_FIXUP_HEADER(XGENE_PCIE_VENDORID, XGENE_PCIE_DEVICEID,
> +			 xgene_pcie_fixup_bridge);

Shouldn't this fixup be gone now that the host bridge is correctly
represented at the domain root?

> +static int xgene_pcie_setup(int nr, struct pci_sys_data *sys)
> +{
> +	struct xgene_pcie_port *pp = sys->private_data;
> +	struct resource *io = &pp->realio;
> +
> +	io->start = sys->domain * SZ_64K;
> +	io->end = io->start + SZ_64K;
> +	io->flags = pp->io.res.flags;
> +	io->name = "PCI IO";
> +	pci_ioremap_io(io->start, pp->io.res.start);
> +
> +	pci_add_resource_offset(&sys->resources, io, sys->io_offset);
> +	sys->mem_offset = pp->mem.res.start - pp->mem.pci_addr;
> +	pci_add_resource_offset(&sys->resources, &pp->mem.res,
> +				sys->mem_offset);
> +	return 1;
> +}

Thanks for bringing back the I/O space handling.

You don't seem to set sys->io_offset anywhere, yet each of the
ports listed in your DT starts its local bus I/O range at port 0.
Since every port gets its own 64K window in the Linux I/O space
(sys->domain * SZ_64K above), a per-port offset is needed to
translate between the two.

AFAICT, you need to add (somewhere)

	sys->io_offset = pp->realio.start - pp->io.pci_addr;

but there could be something else missing. You clearly haven't
tested whether the I/O space actually works.
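
With the offset added, the setup function would look something like
this (untested; the field names come straight from your driver above):

	static int xgene_pcie_setup(int nr, struct pci_sys_data *sys)
	{
		struct xgene_pcie_port *pp = sys->private_data;
		struct resource *io = &pp->realio;

		io->start = sys->domain * SZ_64K;
		io->end = io->start + SZ_64K - 1;	/* resource ends are inclusive */
		io->flags = pp->io.res.flags;
		io->name = "PCI IO";
		pci_ioremap_io(io->start, pp->io.res.start);

		/* translate the per-port bus address (pp->io.pci_addr)
		 * into this port's window in the Linux I/O space */
		sys->io_offset = io->start - pp->io.pci_addr;
		pci_add_resource_offset(&sys->resources, io, sys->io_offset);

		sys->mem_offset = pp->mem.res.start - pp->mem.pci_addr;
		pci_add_resource_offset(&sys->resources, &pp->mem.res,
					sys->mem_offset);
		return 1;
	}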

If you want to try out the I/O space, I'd suggest using an Intel
e1000 network card, which decodes both memory and I/O space. There
is a patch at http://www.spinics.net/lists/linux-pci/msg27684.html
that lets you check the I/O registers on it, or you can go
through /dev/port from user space.
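
If you go the /dev/port route, something like this quick hack should
do (run as root, and substitute whatever I/O port the bridge actually
assigned to the card):

	/* io_peek.c - read one byte from an I/O port via /dev/port,
	 * e.g.: cc -o io_peek io_peek.c && ./io_peek 0xe000
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		unsigned long port;
		unsigned char val;
		int fd;

		if (argc < 2) {
			fprintf(stderr, "usage: %s <port>\n", argv[0]);
			return 1;
		}
		port = strtoul(argv[1], NULL, 0);

		/* /dev/port exposes the I/O space as a file: the file
		 * offset is the port number */
		fd = open("/dev/port", O_RDONLY);
		if (fd < 0 || pread(fd, &val, 1, port) != 1) {
			perror("/dev/port");
			return 1;
		}
		printf("port 0x%lx = 0x%02x\n", port, val);
		return 0;
	}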

I also haven't seen your patch that adds pci_ioremap_io() for
arm64. It would be helpful to keep it in the same patch
series, since this series won't build without it.
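
For reference, the 32-bit version in arch/arm/mm/ioremap.c is roughly
the following; an arm64 port would need its own fixed virtual window
and would presumably use PROT_DEVICE_nGnRE instead of the 32-bit-only
MT_DEVICE lookup:

	int pci_ioremap_io(unsigned int offset, phys_addr_t phys_addr)
	{
		BUG_ON(offset + SZ_64K > IO_SPACE_LIMIT);

		/* map one 64K port window into the fixed PCI I/O
		 * virtual area at the given offset */
		return ioremap_page_range(PCI_IO_VIRT_BASE + offset,
					  PCI_IO_VIRT_BASE + offset + SZ_64K,
					  phys_addr,
					  __pgprot(get_mem_type(MT_DEVICE)->prot_pte));
	}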

	Arnd