Message-ID: <6228807.4v9ZRtib77@wuerfel>
Date: Tue, 08 Apr 2014 12:11:07 +0200
From: Arnd Bergmann <arnd@...db.de>
To: Liviu Dudau <Liviu.Dudau@....com>
Cc: Bjorn Helgaas <bhelgaas@...gle.com>,
linux-pci <linux-pci@...r.kernel.org>,
Catalin Marinas <Catalin.Marinas@....com>,
Will Deacon <Will.Deacon@....com>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
linaro-kernel <linaro-kernel@...ts.linaro.org>,
LKML <linux-kernel@...r.kernel.org>,
"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
LAKML <linux-arm-kernel@...ts.infradead.org>,
Tanmay Inamdar <tinamdar@....com>,
Grant Likely <grant.likely@...retlab.ca>
Subject: Re: [PATCH v7 1/6] pci: Introduce pci_register_io_range() helper function.

On Tuesday 08 April 2014 10:49:33 Liviu Dudau wrote:
> On Tue, Apr 08, 2014 at 12:21:51AM +0100, Bjorn Helgaas wrote:
> > On Fri, Mar 14, 2014 at 9:34 AM, Liviu Dudau <Liviu.Dudau@....com> wrote:
> > > Some architectures do not share x86's simple view of the PCI I/O space
> > > and instead use a range of addresses that map to bus addresses. For
> > > some architectures these ranges will be expressed by OF bindings
> > > in a device tree file.
> > >
> > > Introduce a pci_register_io_range() helper function that can be used
> > > by the architecture code to keep track of the I/O ranges described by the
> > > PCI bindings. If the PCI_IOBASE macro is not defined that signals
> > > lack of support for PCI and we return an error.
> > >
> > > Signed-off-by: Liviu Dudau <Liviu.Dudau@....com>
> > > Acked-by: Grant Likely <grant.likely@...aro.org>
> > > Tested-by: Tanmay Inamdar <tinamdar@....com>
> > > ---
> > > drivers/of/address.c | 9 +++++++++
> > > include/linux/of_address.h | 1 +
> > > 2 files changed, 10 insertions(+)
> > >
> > > diff --git a/drivers/of/address.c b/drivers/of/address.c
> > > index 1a54f1f..be958ed 100644
> > > --- a/drivers/of/address.c
> > > +++ b/drivers/of/address.c
> > > @@ -619,6 +619,15 @@ const __be32 *of_get_address(struct device_node *dev, int index, u64 *size,
> > > }
> > > EXPORT_SYMBOL(of_get_address);
> > >
> > > +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
> > > +{
> > > +#ifndef PCI_IOBASE
> > > + return -EINVAL;
> > > +#else
> > > + return 0;
> > > +#endif
> > > +}
> >
> > This isn't PCI code, so I'm fine with it in that sense, but I'm not
> > sure the idea of a PCI_IOBASE #define is really what we need. It's
> > not really determined by the processor architecture, it's determined
> > by the platform. And a single address isn't enough in general,
> > either, because if there are multiple host bridges, there's no reason
> > the apertures that generate PCI I/O transactions need to be contiguous
> > on the CPU side.
>
> It should not be only the platform's choice if the architecture doesn't
> support it. To my mind, PCI_IOBASE means "I support MMIO operations and
> this is the start of the virtual address range where my I/O ranges are
> mapped." It's the
> same as ppc's _IO_BASE. And pci_address_to_pio() will take care to give
> you the correct io_offset in the presence of multiple host bridges,
> while keeping the io resource in the range [0 .. host_bridge_io_range_size - 1]
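
To make the PCI_IOBASE model concrete: with a single fixed virtual base,
the port accessors reduce to adding the PIO token to that base. A rough
sketch of the idea (simplified; the real asm-generic/io.h accessors are
a bit more involved, and the helper names here are made up):

#include <linux/io.h>
#include <linux/types.h>

/*
 * Illustration only: the PIO "token" that the PCI core hands around is
 * simply an offset into one fixed virtual window that every host
 * bridge's I/O aperture gets mapped into.
 */
static inline u8 sketch_inb(unsigned long port)
{
	return readb(PCI_IOBASE + port);
}

static inline void sketch_outb(u8 value, unsigned long port)
{
	writeb(value, PCI_IOBASE + port);
}
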
There is a wide range of implementations across architectures:
a) no access to I/O ports at all (tile, s390, ...)
b) access to I/O ports only through special instructions (x86, ...)
c) all MMIO is mapped virtually contiguous at PCI_IOBASE or _IO_BASE
(most ppc64, arm32 with MMU, arm64, ...)
d) only one PCI host can have an I/O space (mips, microblaze, ...)
e) each host controller can have its own method (ppc64 with indirect pio)
f) PIO token equals virtual address plus offset (some legacy ARM platforms,
probably some other architectures), or physical address (sparc)
g) PIO token encodes address space number plus offset (ia64)

a) and b) are trivially handled by any implementation that falls back
to 'return -EINVAL'.

I believe that c) is the most appropriate solution, and we should be
able to adopt it for most of the architectures that have an MMU and
make it the default implementation.
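
A generic version of pci_register_io_range() along those lines could
keep a list of the apertures that each host bridge registers and hand
out consecutive logical PIO ranges below IO_SPACE_LIMIT. Very rough
sketch, untested, with locking and overlap checking left out:

#include <linux/errno.h>
#include <linux/io.h>
#include <linux/list.h>
#include <linux/slab.h>

struct io_range {
	struct list_head list;
	phys_addr_t start;	 /* CPU physical base of the aperture */
	resource_size_t size;
	unsigned long pio_start; /* logical PIO offset given to the core */
};

static LIST_HEAD(io_range_list);
static unsigned long io_range_top;	/* next free logical PIO offset */

int pci_register_io_range(phys_addr_t addr, resource_size_t size)
{
#ifndef PCI_IOBASE
	return -EINVAL;		/* no MMIO-based port I/O on this arch */
#else
	struct io_range *range;

	if (io_range_top + size > IO_SPACE_LIMIT)
		return -E2BIG;	/* logical I/O space exhausted */

	range = kzalloc(sizeof(*range), GFP_KERNEL);
	if (!range)
		return -ENOMEM;

	range->start = addr;
	range->size = size;
	range->pio_start = io_range_top;
	list_add_tail(&range->list, &io_range_list);
	io_range_top += size;

	return 0;
#endif
}

/*
 * Translate a CPU physical address back into a logical PIO token,
 * independent of which host bridge's aperture it falls into.
 */
unsigned long pci_address_to_pio(phys_addr_t address)
{
	struct io_range *range;

	list_for_each_entry(range, &io_range_list, list)
		if (address >= range->start &&
		    address < range->start + range->size)
			return range->pio_start + (address - range->start);

	return (unsigned long)-1;
}

The mapping into the virtual window at PCI_IOBASE would then be set up
once per bridge, the way arm32 already does it with pci_ioremap_io().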

d) seems like a good fallback for noMMU architectures: while we do need
to support I/O spaces, multiple PCI domains, and noMMU, the combination
of all three should be extremely rare, and I'd leave it up to the
architecture to support that if there is a real use case, rather than
trying to put it into common code.

Anything that currently requires e), f) or g) should, I think, keep
doing that and not try to use the generic implementation.

Arnd