Message-ID: <20220415183912.GA824311@bhelgaas>
Date:   Fri, 15 Apr 2022 13:39:12 -0500
From:   Bjorn Helgaas <helgaas@...nel.org>
To:     "Maciej W. Rozycki" <macro@...am.me.uk>
Cc:     Palmer Dabbelt <palmer@...belt.com>,
        Bjorn Helgaas <bhelgaas@...gle.com>, linux-pci@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] PCI: Avoid handing out address 0 to devices

On Fri, Apr 15, 2022 at 01:27:17PM +0100, Maciej W. Rozycki wrote:
> On Thu, 14 Apr 2022, Bjorn Helgaas wrote:
> 
> > > > > > > Address 0 is however treated specially in many places: for
> > > > > > > example in `pci_iomap_range' and `pci_iomap_wc_range' we
> > > > > > > require that the start address is non-zero, and even if we
> > > > > > > let such an address through, then individual device drivers
> > > > > > > could reject a request to handle a device at such an
> > > > > > > address, as in `uart_configure_port'.  Consequently, of the
> > > > > > > devices configured as shown above, only one is actually
> > > > > > > usable:
> > > > > > 
> > > > > > pci_iomap_range() tests the resource start, i.e., the CPU
> > > > > > address.  I guess the implication is that on RISC-V, the
> > > > > > CPU-side port address is the same as the PCI bus port address?
> > > > > 
> > > > >  Umm, for all systems I have come across except x86, which has
> > > > >  native port I/O access machine instructions, a port I/O
> > > > >  resource records the PCI bus addresses of the device rather
> > > > >  than its CPU addresses, which encode the location of the MMIO
> > > > >  window the PCI port I/O space is accessed through.
> > > > 
> > > > My point is only that it is not necessary for the PCI bus address
> > > > and the struct resource address, i.e., the argument to inb(), to
> > > > be the same.
> > > 
> > >  Sure, but I have yet to see a system where that is the case.
> > > 
> > >  Also, in principle, peer PCI buses could have their own port I/O
> > >  address spaces, each mapped via a distinct MMIO window in the CPU
> > >  address space, but I haven't heard of such a system.  That of
> > >  course doesn't mean there's no such system in existence.
> > 
> > They do exist, but are probably rare.  Even on x86 where multiple host
> > bridges are now fairly common, and the hardware probably supports a
> > separate 64K port space for each, the ones I've seen split up a single
> > 64K I/O port space so each bridge only gets a fraction of it.  I'm not
> > sure Linux would even support multiple spaces.  I do know ia64
> > supports multiple port spaces (see __ia64_mk_io_addr()), so we could
> > have something like this:
> > 
> >   pci_bus 0000:00: root bus resource [io  0x0000-0xffff]
> >   pci_bus 0001:00: root bus resource [io  0x10000-0x1ffff] (bus address [0x0000-0xffff])
> 
>  Yeah, I guess if anything, it *had* to be IA64!

:)
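
(For illustration, the translation ia64 does is conceptually something
like the sketch below.  This is a simplified mock-up, not the actual
__ia64_mk_io_addr() code; the structure and names here are made up:

	#define MAX_IO_SPACES	16	/* arbitrary for illustration */

	struct io_space {
		void __iomem *mmio_base;	/* CPU-side MMIO window */
	};

	static struct io_space io_spaces[MAX_IO_SPACES];

	/*
	 * Translate a logical port number to a CPU address: bits 16
	 * and up select which peer bus's 64K space is meant, the low
	 * 16 bits are the PCI bus port address within that space.
	 */
	static void __iomem *mk_io_addr(unsigned long port)
	{
		unsigned int space = port >> 16;
		unsigned long offset = port & 0xffff;

		return io_spaces[space].mmio_base + offset;
	}

so with the layout quoted above, inb(0x10060) would hit offset 0x60 of
the MMIO window belonging to domain 0001.)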

>  Conversely, Alpha systems decode the full 32-bit address range for port
> I/O and happily assign I/O BARs beyond 64K in their firmware, though as
> a single uniform address space even across several peer PCI buses.
> 
>  As to x86 systems, as I mentioned they have native port I/O access
> machine instructions, which only support 16-bit addressing, so I wouldn't
> expect more than a single 64K port I/O space to be implemented with them.
> There is of course no problem at the CPU bus level with presenting port
> I/O addresses beyond 64K, and as a matter of interest the original 80386
> CPU did make use of them internally for communicating with the 80387 FPU,
> precisely because they cannot be produced with machine code and therefore
> a programmer could not interfere with the CPU-to-FPU communication
> protocol.  Port I/O locations 0x800000f8 through 0x800000ff were actually
> used in that protocol[1][2].

> > I guess the question is whether we want to reserve port 0 and MMIO
> > address 0 as being "invalid".  That makes the first space smaller than
> > the others, but it's not *much* smaller and it's an unlikely
> > configuration to begin with.
> 
>  Unfortunately, just as IRQ 0 is considered special and, barring the 8254
> exception for PC-style legacy junk, means "no IRQ", so an I/O port or
> MMIO address of 0 is considered "no device" in several places.  One I
> have identified, as noted above, is `uart_configure_port':
> 
> 	/*
> 	 * If there isn't a port here, don't do anything further.
> 	 */
> 	if (!port->iobase && !port->mapbase && !port->membase)
> 		return;
> 
> So even if we let address 0 through, it will be rejected downstream here
> and there, and the device won't work.

This is a driver question, which I think is secondary.  If necessary,
we can fix drivers after figuring out what the PCI core should do.
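
For reference, the check in question sits at the top of
pci_iomap_range(), roughly like this (paraphrased from lib/pci_iomap.c;
exact details vary by kernel version):

	void __iomem *pci_iomap_range(struct pci_dev *dev, int bar,
				      unsigned long offset,
				      unsigned long maxlen)
	{
		resource_size_t start = pci_resource_start(dev, bar);
		resource_size_t len = pci_resource_len(dev, bar);
		unsigned long flags = pci_resource_flags(dev, bar);

		/*
		 * A BAR whose resource starts at 0 is rejected here;
		 * the alternative floated below would be to test
		 * (flags & IORESOURCE_UNSET) instead of !start.
		 */
		if (len <= offset || !start)
			return NULL;
		...
	}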

> > We do have the IORESOURCE_UNSET flag bit that could possibly be used
> > in pci_iomap_range() instead of testing for "!start".  Or maybe
> > there's a way to allocate address 0 instead of special-casing the
> > allocator?  Just thinking out loud here.

Another possibility is PCIBIOS_MIN_IO.  It's also kind of an ugly
special case, but at least it already exists.  Most arches define it
to be non-zero, which should avoid this issue.

Defining PCIBIOS_MIN_IO would be simple; what would we lose compared
to adding code in pci_bus_alloc_from_region()?
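
A sketch of what defining it would look like on an arch that currently
leaves it at zero (the values are arbitrary illustrations; IIRC this is
the minimum __pci_assign_resource() passes down to the allocator, so
any non-zero value keeps address 0 from being handed out):

	/* arch/<arch>/include/asm/pci.h */
	#define PCIBIOS_MIN_IO		0x1000
	#define PCIBIOS_MIN_MEM		0x10000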

Bjorn
