Message-ID: <4DA11BEE.1080500@codemonkey.ws>
Date: Sat, 09 Apr 2011 21:54:38 -0500
From: Anthony Liguori <anthony@...emonkey.ws>
To: Olivier Galibert <galibert@...ox.com>
CC: Pekka Enberg <penberg@...nel.org>, Ingo Molnar <mingo@...e.hu>,
Avi Kivity <avi@...hat.com>, linux-kernel@...r.kernel.org,
aarcange@...hat.com, mtosatti@...hat.com, kvm@...r.kernel.org,
joro@...tes.org, penberg@...helsinki.fi, asias.hejun@...il.com,
gorcunov@...il.com
Subject: Re: [ANNOUNCE] Native Linux KVM tool
On 04/09/2011 01:23 PM, Olivier Galibert wrote:
> On Fri, Apr 08, 2011 at 09:00:43AM -0500, Anthony Liguori wrote:
>> Really, having a flat table doesn't make sense. You should just send
>> everything to an i440fx directly. Then the i440fx should decode what it
>> can, and send it to the next level, and so forth.
> No you shouldn't. The i440fx should merge and arbitrate the mappings
> and then push *direct* links to the handling functions at the top
> level. Mapping changes don't happen often on modern hardware, and
> decoding is expensive.
Decoding is not all that expensive. For non-PCI devices, the addresses
are almost always fixed, so decoding reduces to a series of conditionals
and a call chain no more than 3 or 4 levels deep.
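Roughly this shape (just a sketch; the handler names and the fixed HPET/IOAPIC
addresses are illustrative, not code from either tree):

/* Sketch of hierarchical decode for fixed-address, non-PCI devices.
 * The point is the depth of the call chain, not the API. */

#include <stdint.h>

static uint64_t hpet_read(void *opaque, uint64_t addr, unsigned size)     { return 0; }
static uint64_t ioapic_read(void *opaque, uint64_t addr, unsigned size)   { return 0; }
static uint64_t pci_host_read(void *opaque, uint64_t addr, unsigned size) { return ~0ULL; }

/* Top-level "i440fx" decode: a handful of fixed ranges, then forward. */
uint64_t i440fx_mmio_read(void *opaque, uint64_t addr, unsigned size)
{
    if (addr >= 0xfed00000 && addr < 0xfed00400)        /* HPET */
        return hpet_read(opaque, addr - 0xfed00000, size);
    if (addr >= 0xfec00000 && addr < 0xfec01000)        /* IOAPIC */
        return ioapic_read(opaque, addr - 0xfec00000, size);
    /* everything else falls through to the PCI host bridge */
    return pci_host_read(opaque, addr, size);
}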
For PCI devices, any downstream devices are going to fall into specific
regions that the bridge registers. Even in the pathological case of a
bus populated with 32 multi-function devices, each function with its 6
BARs, it's still a non-overlapping list of ranges (32 devices x 8
functions x 6 BARs is roughly 1500 entries). There's nothing that
prevents you from keeping a sorted copy of that list so that you can
binary search to the proper dispatch device. Binary searching a list of
1500 entries is quite fast.
In practice, you have no more than 10-20 PCI devices, each with 2-3
BARs. A simple linear search is not going to have noticeable overhead.
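Something like this is all it takes (a sketch with made-up names, not code
from either tree):

/* Sketch: dispatch by binary search over a sorted, non-overlapping list
 * of BAR ranges.  At 20-60 entries a linear scan would do just as well;
 * the search only matters for the pathological ~1500-entry case. */

#include <stdint.h>
#include <stddef.h>

struct mmio_range {
    uint64_t base;
    uint64_t len;
    void    *opaque;                               /* device state */
    uint64_t (*read)(void *opaque, uint64_t offset, unsigned size);
};

/* 'ranges' is kept sorted by base whenever a BAR is mapped or moved. */
static struct mmio_range *mmio_lookup(struct mmio_range *ranges, size_t n,
                                      uint64_t addr)
{
    size_t lo = 0, hi = n;

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;

        if (addr < ranges[mid].base)
            hi = mid;
        else if (addr >= ranges[mid].base + ranges[mid].len)
            lo = mid + 1;
        else
            return &ranges[mid];
    }
    return NULL;                                    /* unassigned hole */
}

uint64_t mmio_dispatch_read(struct mmio_range *ranges, size_t n,
                            uint64_t addr, unsigned size)
{
    struct mmio_range *r = mmio_lookup(ranges, n, addr);

    /* reads to unassigned space return all-ones, like a master abort */
    return r ? r->read(r->opaque, addr - r->base, size) : ~0ULL;
}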
> Incidentally, you can have special handling
> functions which are in reality references to kernel handlers,
> shortcutting userspace entirely for critical ports/mmio ranges.
The cost here is the trip from the guest to userspace and back. If you
want to short cut in the kernel, you have to do that *before* returning
to userspace. In that case, how userspace models I/O flow doesn't matter.
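For what it's worth, KVM already gives you that kind of in-kernel shortcut
with ioeventfd. A minimal sketch of registering one (error handling omitted;
vm_fd is assumed to be an already-created KVM VM descriptor, doorbell_gpa a
guest-physical MMIO address you picked):

/* A guest write to the doorbell register is matched in the kernel and
 * just signals the eventfd; it never reaches userspace at all. */

#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int register_doorbell(int vm_fd, uint64_t doorbell_gpa)
{
    int efd = eventfd(0, 0);
    struct kvm_ioeventfd ioe = {
        .addr  = doorbell_gpa,   /* MMIO address of the doorbell register */
        .len   = 4,
        .fd    = efd,
        .flags = 0,              /* no datamatch, MMIO (not PIO) */
    };

    if (ioctl(vm_fd, KVM_IOEVENTFD, &ioe) < 0)
        return -1;

    /* From here on, how userspace models I/O flow for this range
     * never comes into the picture. */
    return efd;
}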
The reason flow matters is that PCI controllers can alter the I/O on the
way through. Most PCI devices use little-endian device registers, and
some big-endian-oriented buses will automatically do the endian
conversion.
Even without that kind of controller, if you use a native-endian API, an
MMIO dispatch API is going to convert the data to the target
architecture's endianness. However, if the device is returning its data
in little endian (as PCI registers usually are), you then need to flip
the endianness back.
In QEMU, we handle this by registering BARs through a function pointer
trampoline that does the conversion. But that only works because it goes
through the special BAR registration API; if you just hook the mapping
API, you'll probably get this wrong.
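The trampoline is roughly this shape (a sketch only; the structure and names
are made up, not the actual QEMU API):

/* Per-BAR read trampoline.  The device handler returns little-endian
 * register data; if the target architecture is big endian, the trampoline
 * flips it before handing it back to the dispatch layer. */

#include <stdint.h>
#include <stdbool.h>

struct pci_bar {
    void     *dev;
    uint32_t (*read)(void *dev, uint64_t offset);  /* returns LE data */
};

static inline uint32_t bswap32(uint32_t v)
{
    return ((v & 0x000000ffu) << 24) | ((v & 0x0000ff00u) << 8) |
           ((v & 0x00ff0000u) >> 8)  | ((v & 0xff000000u) >> 24);
}

/* Registered as the BAR's read handler; the dispatch layer never calls
 * the device function directly. */
uint32_t pci_bar_read_trampoline(struct pci_bar *bar, uint64_t offset,
                                 bool target_big_endian)
{
    uint32_t val = bar->read(bar->dev, offset);
    return target_big_endian ? bswap32(val) : val;
}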
Regards,
Anthony Liguori
> OG.