Date:	Tue, 8 Jul 2014 15:29:51 -0600
From:	Bjorn Helgaas <bhelgaas@...gle.com>
To:	Arnd Bergmann <arnd@...db.de>
Cc:	Liviu Dudau <Liviu.Dudau@....com>,
	linux-pci <linux-pci@...r.kernel.org>,
	Catalin Marinas <Catalin.Marinas@....com>,
	Will Deacon <Will.Deacon@....com>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>,
	linaro-kernel <linaro-kernel@...ts.linaro.org>,
	Tanmay Inamdar <tinamdar@....com>,
	Grant Likely <grant.likely@...retlab.ca>,
	Sinan Kaya <okaya@...eaurora.org>,
	Jingoo Han <jg1.han@...sung.com>,
	Kukjin Kim <kgene.kim@...sung.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@....com>,
	LKML <linux-kernel@...r.kernel.org>,
	Device Tree ML <devicetree@...r.kernel.org>,
	LAKML <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH v8 3/9] pci: Introduce pci_register_io_range() helper
 function.

On Tue, Jul 8, 2014 at 1:00 AM, Arnd Bergmann <arnd@...db.de> wrote:
> On Tuesday 08 July 2014, Bjorn Helgaas wrote:
>> On Tue, Jul 01, 2014 at 07:43:28PM +0100, Liviu Dudau wrote:
>> > +static LIST_HEAD(io_range_list);
>> > +
>> > +/*
>> > + * Record the PCI IO range (expressed as CPU physical address + size).
>> > + * Return a negative value if an error has occurred, zero otherwise
>> > + */
>> > +int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
>>
>> I don't understand the interface here.  What's the mapping from CPU
>> physical address to bus I/O port?  For example, I have the following
>> machine in mind:
>>
>>   HWP0002:00: PCI Root Bridge (domain 0000 [bus 00-1b])
>>   HWP0002:00: memory-mapped IO port space [mem 0xf8010000000-0xf8010000fff]
>>   HWP0002:00: host bridge window [io  0x0000-0x0fff]
>>
>>   HWP0002:09: PCI Root Bridge (domain 0001 [bus 00-1b])
>>   HWP0002:09: memory-mapped IO port space [mem 0xf8110000000-0xf8110000fff]
>>   HWP0002:09: host bridge window [io  0x1000000-0x1000fff] (PCI address [0x0-0xfff])
>>
>> The CPU physical memory [mem 0xf8010000000-0xf8010000fff] is translated by
>> the bridge to I/O ports 0x0000-0x0fff on PCI bus 0000:00.  Drivers use,
>> e.g., "inb(0)" to access it.
>>
>> Similarly, [mem 0xf8110000000-0xf8110000fff] is translated by the second
>> bridge to I/O ports 0x0000-0x0fff on PCI bus 0001:00.  Drivers use
>> "inb(0x1000000)" to access it.
>
> I guess you are thinking of the IA64 model here where you keep the virtual
> I/O port numbers in a per-bus lookup table that gets accessed for each
> inb() call. I've thought about this some more, and I believe there are good
> reasons for sticking with the model used on arm32 and powerpc for the
> generic OF implementation.
>
> The idea is that there is a single virtual memory range for all I/O port
> mappings and we use the MMU to do the translation rather than computing
> it manually in the inb() implementation. The main advantage is that all
> functions used in device drivers to (potentially) access I/O ports
> become trivial this way, which helps for code size and in some cases
> (e.g. SoC-internal registers with a low latency) it may even be performance
> relevant.

My example is from ia64, but I'm not advocating for the lookup table.
The point is that the hardware works similarly (at least for dense ia64
I/O port spaces) in terms of mapping CPU physical addresses to PCI I/O
space.
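
If I follow, that model makes the port accessors trivial because the
MMU does the translation up front.  Roughly, I'd expect inb() to look
something like this under that scheme (a sketch along the lines of
asm-generic/io.h, where PCI_IOBASE is the fixed virtual base of the
single I/O window):

  /*
   * Sketch only: with a single virtual window for all I/O port
   * mappings, the accessor is a plain load at a fixed offset, with
   * no per-bus lookup table.
   */
  static inline u8 inb(unsigned long port)
  {
          return readb(PCI_IOBASE + port);
  }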

I think my confusion is because your pci_register_io_range() and
pci_address_to_pci() implementations assume that every io_range starts at
I/O port 0 on PCI (correct me if I'm wrong).  I suspect that's why you
don't save the I/O port number in struct io_range.

Maybe that assumption is guaranteed by OF, but it doesn't hold for ACPI;
ACPI can describe several I/O port apertures for a single bridge, each
associated with a different CPU physical memory region.

If my speculation here is correct, a comment to the effect that each
io_range corresponds to a PCI I/O space range that starts at 0 might be
enough.

If you did add a PCI I/O port number argument to pci_register_io_range(),
we might be able to make an ACPI-based implementation of it.  But I guess
that could be done if/when anybody ever wants to do that.
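
For illustration, something like this is what I have in mind (purely
hypothetical; the pci_addr parameter and field are my invention, not
part of your patch):

  /*
   * Hypothetical variant that records where each range starts in PCI
   * I/O space instead of assuming port 0, so an ACPI implementation
   * could register several apertures per bridge.
   */
  struct io_range {
          struct list_head list;
          phys_addr_t      start;    /* CPU physical address */
          resource_size_t  size;
          resource_size_t  pci_addr; /* starting PCI I/O port */
  };

  int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size,
                                   resource_size_t pci_addr);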

>> Here's what these look like in /proc/iomem and /proc/ioports (note that
>> there are two resource structs for each memory-mapped IO port space: one
>> IORESOURCE_MEM for the memory-mapped area (used only by the host bridge
>> driver), and one IORESOURCE_IO for the I/O port space (this becomes the
>> parent of a region used by a regular device driver)):
>>
>>   /proc/iomem:
>>     PCI Bus 0000:00 I/O Ports 00000000-00000fff
>>     PCI Bus 0001:00 I/O Ports 01000000-01000fff

Oops, I forgot the actual physical memory addresses here, but you got
the idea anyway.  It should have been something like this:

  /proc/iomem:
    f8010000000-f8010000fff PCI Bus 0000:00 I/O Ports 00000000-00000fff
    f8110000000-f8110000fff PCI Bus 0001:00 I/O Ports 01000000-01000fff
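
In case it's useful, the two resource structs I mentioned would be
registered along these lines (sketch only; the variable names are
made up):

  /*
   * Sketch: one IORESOURCE_MEM resource for the memory-mapped window
   * (claimed by the host bridge driver) and one IORESOURCE_IO
   * resource for the I/O port space behind it (the parent that
   * regular device drivers request regions from).
   */
  static struct resource mmio_window = {
          .name  = "PCI Bus 0001:00 I/O window",
          .start = 0xf8110000000ULL,
          .end   = 0xf8110000fffULL,
          .flags = IORESOURCE_MEM,
  };

  static struct resource io_space = {
          .name  = "PCI Bus 0001:00 I/O Ports",
          .start = 0x01000000,
          .end   = 0x01000fff,
          .flags = IORESOURCE_IO,
  };

  static int __init register_bridge_windows(void)
  {
          insert_resource(&iomem_resource, &mmio_window);
          insert_resource(&ioport_resource, &io_space);
          return 0;
  }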

Bjorn