Message-ID: <CAK8P3a0o1rd4BiYwGq_JnWthBG11rxCevKE3+x3fE-S2EnbTxg@mail.gmail.com>
Date: Mon, 11 Jul 2022 08:05:01 +0200
From: Arnd Bergmann <arnd@...db.de>
To: Stafford Horne <shorne@...il.com>
Cc: Arnd Bergmann <arnd@...db.de>, LKML <linux-kernel@...r.kernel.org>,
Openrisc <openrisc@...ts.librecores.org>,
Jonas Bonn <jonas@...thpole.se>,
Stefan Kristiansson <stefan.kristiansson@...nalahti.fi>,
Peter Zijlstra <peterz@...radead.org>,
Palmer Dabbelt <palmer@...osinc.com>
Subject: Re: [PATCH 1/2] openrisc: Add pci bus support
On Sun, Jul 10, 2022 at 11:22 PM Stafford Horne <shorne@...il.com> wrote:
> On Sun, Jul 10, 2022 at 05:54:22PM +0200, Arnd Bergmann wrote:
>
> OK, I see, but I think IO_SPACE_LIMIT needs to be defined as something other
> than 0. It is used to define kernel/resource.c's ioport_resource.
I think the kernel/resource.c one is fine; it just means that any attempt to
register an I/O port resource below the 0-length root resource will fail, which
is what you need when inb/outb cannot be used.
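For reference, kernel/resource.c defines the root resource in terms of
IO_SPACE_LIMIT like this (abbreviated from mainline):

  struct resource ioport_resource = {
          .name   = "PCI IO",
          .start  = 0,
          .end    = IO_SPACE_LIMIT,
          .flags  = IORESOURCE_IO,
  };

So with IO_SPACE_LIMIT at 0 the root covers nothing beyond port 0, and
any real request_region() call falls outside it and fails.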
> For example on risc-v they set it to 16MB.
>
> I will setup a LIMIT smaller than 4GB and add a PCI_IOBASE.
Do you support multiple PCI domains? Usually you want at most 64KB
per domain, as that is the traditional limit and what the normal
pci_remap_iospace() will assign to a domain. The 16MB limit for riscv
is way more than what one may need on a 32-bit machine, since that
is enough for 256 domains (16MB / 64KB) even with the largest possible
I/O space, and each domain has up to 65536 PCI functions attached to it.
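A minimal sketch of what the asm/io.h side could look like for a
single domain, assuming the traditional 64KB space; the virtual
address here is purely illustrative, not something the port defines
today:

  /* Illustrative values only: one domain, 64KB of ports. */
  #define PCI_IOBASE      ((void __iomem *)0xfe000000)
  #define IO_SPACE_LIMIT  0xffff

  #include <asm-generic/io.h>

With that, the asm-generic inb()/outb() helpers turn port numbers
into plain loads and stores at PCI_IOBASE + port.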
> > Most PCI controller are however able to map I/O ports into the
> > physical address space of the CPU, and in that case you can just
> > define an otherwise unused address as PCI_IOBASE and map the
> > ports there from the PCI host bridge driver.
>
> OK, understood, do you think this needs to be documented in a architecture
> manual? Maybe it's fine for it to be linux specific.
Of course it's Linux-specific, but it's also architecture-specific,
since there are different ways of making I/O space available:

- Generally you can leave it out completely, unless you have to
  support devices from two decades ago.
- Some architectures that existed back then have custom instructions.
- Some hardcode part of the virtual address space to access the
  I/O ports at a fixed location.
- Some rely on an indirect method, going through a particular MMIO
  register to access all of I/O space.
- Some use a per-hostbridge window that gets mapped using
  pci_remap_iospace() (a sketch follows below).
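For that last variant, a rough sketch of the host bridge driver side,
with made-up window addresses (the resource holds bus port numbers,
the second argument is the CPU physical address of the window):

  #include <linux/pci.h>
  #include <linux/sizes.h>

  /* Sketch only: map a 64KB I/O window into the PCI_IOBASE area. */
  struct resource io_res = {
          .name  = "example PCI I/O",
          .start = 0,              /* I/O port 0 on the bus */
          .end   = SZ_64K - 1,
          .flags = IORESOURCE_IO,
  };

  int ret = pci_remap_iospace(&io_res, 0xe0000000 /* CPU phys */);

After that succeeds, inb()/outb() on ports 0-0xffff hit the bridge's
window.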
Do you have a driver for your host bridge available somewhere?
It should be clear from that driver which method you need.
Arnd