Message-ID: <474BDD28.7050801@qumranet.com>
Date: Tue, 27 Nov 2007 11:02:32 +0200
From: Avi Kivity <avi@...ranet.com>
To: Anthony Liguori <aliguori@...ibm.com>
CC: Eric Van Hensbergen <ericvanhensbergen@...ibm.com>,
lguest <lguest@...abs.org>, kvm-devel@...ts.sourceforge.net,
linux-kernel@...r.kernel.org, virtualization@...ts.osdl.org
Subject: Re: [kvm-devel] [PATCH 3/3] virtio PCI device

Anthony Liguori wrote:
>> Another point is that virtio still has a lot of leading zeros in its
>> mileage counter. We need to keep things flexible and learn from
>> others as much as possible, especially when talking about the ABI.
>
> Yes, after thinking about it over the holiday, I agree that we should
> at least introduce a virtio-pci feature bitmask. I'm not inclined to
> attempt to define a hypercall ABI or anything like that right now, but
> having the feature bitmask will make it possible to do such a thing
> in the future.
No, we should definitely not define a hypercall ABI. The feature bit
should say "this device understands a hypervisor-specific way of
kicking; consult your hypervisor manual and cpuid bits for further
details. Should you not be satisfied with this method, port io is
still available".
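
Something like this minimal sketch (the feature bit, struct, and helper
names below are invented for illustration, not taken from the patch):

/*
 * Guest-side kick: use the hypervisor-specific mechanism if the device
 * advertised it, otherwise fall back to port io.
 */
#include <linux/types.h>
#include <linux/io.h>

#define VIRTIO_PCI_F_HV_KICK       (1 << 24)   /* hypothetical feature bit */
#define VIRTIO_PCI_QUEUE_NOTIFY    16          /* illustrative pio offset */

struct vp_device {
        u32 features;                   /* negotiated feature bits */
        unsigned long ioaddr;           /* pio window base */
};

/* hypervisor-specific kick (e.g. a patched hypercall), provided elsewhere */
extern void hv_kick(struct vp_device *vp, u16 queue);

static void vp_kick(struct vp_device *vp, u16 queue)
{
        if (vp->features & VIRTIO_PCI_F_HV_KICK)
                hv_kick(vp, queue);
        else
                outw(queue, vp->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY);
}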
>
>>> I'm wary of introducing the notion of hypercalls to this device
>>> because it makes the device VMM-specific. Maybe we could have the
>>> device provide an option ROM that was treated as the device "BIOS"
>>> and used for kicking and interrupt acking? Any idea of how that
>>> would map to Windows? Are there real PCI devices that use the
>>> option ROM space to provide what's essentially firmware?
>>> Unfortunately, I don't think an option ROM BIOS would map well to
>>> other architectures.
>>>
>>
>> The BIOS wouldn't work even on x86, because it isn't mapped into the
>> guest address space (at least not consistently) and doesn't know the
>> guest's programming model (16-, 32-, or 64-bit? segmented or flat?).
>>
>> Xen uses a hypercall page to abstract these details out. However, I'm
>> not proposing that. Simply indicate that we support hypercalls, and
>> use some layer below to actually send them. It is the responsibility
>> of this layer to detect whether hypercalls are present and how to
>> issue them.
>>
>> Hey, I think the best place for it is in paravirt_ops. We can even
>> patch the hypercall instruction inline, and the driver doesn't need
>> to know about it.
>
> Yes, paravirt_ops is attractive for abstracting the hypercall calling
> mechanism, but it's still necessary to figure out how hypercalls would
> be identified. I think it would be necessary to define a virtio-specific
> hypercall space and use the virtio device ID to claim subspaces.
>
> For instance, the hypercall number could be (virtio_devid << 16) |
> (call number). How that translates into a hypercall would then be
> part of the paravirt_ops abstraction. In KVM, we may have a single
> virtio hypercall where we pass the virtio hypercall number as one of
> the arguments or something like that.
If we don't call it a hypercall but a virtio kick operation, we don't
need to worry about a hypercall number or ABI. It's just a function,
taking an argument, that every hypervisor implements differently. The
default implementation can be a pio operation.
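
Roughly like this (a sketch only; virtio_kick_ops and the names around
it are invented, not proposed code):

/*
 * One kick hook with a pio default.  A hypervisor that recognizes
 * itself (via cpuid, say) overrides the hook at boot; the virtio
 * driver just calls it and never sees a hypercall number or ABI.
 */
#include <linux/types.h>
#include <linux/io.h>

#define VIRTIO_PCI_QUEUE_NOTIFY    16   /* illustrative pio offset */

struct virtio_kick_ops {
        void (*kick)(unsigned long ioaddr, u16 queue);
};

static void pio_kick(unsigned long ioaddr, u16 queue)
{
        outw(queue, ioaddr + VIRTIO_PCI_QUEUE_NOTIFY);
}

static struct virtio_kick_ops virtio_kick_ops = {
        .kick = pio_kick,               /* default implementation */
};

/* called early by hypervisor-aware code, e.g. from paravirt setup */
void virtio_set_kick(void (*kick)(unsigned long ioaddr, u16 queue))
{
        virtio_kick_ops.kick = kick;
}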
>>>> Make it appear as a pci function? (though my feeling is that
>>>> multiple mounts should be different devices; we can then hotplug
>>>> mountpoints).
>>>>
>>>
>>> We may run out of PCI slots though :-/
>>>
>>
>> Then we can start selling virtio extension chassis.
>
> :-) Do you know if there is a hard limit on the number of devices on
> a PCI bus? My concern was that it was limited by something stupid
> like an 8-bit identifier.
IIRC PCI device (slot) numbers are 5-bit and bus numbers are 8-bit, so
with multiple buses you get effectively 13 bits of slot address space
(16 bits if you count functions, which are likely not hot-pluggable).
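
For reference, the arithmetic (standard PCI config addressing, worked
out here rather than taken from the thread):

/*
 * bus:device.function is 8 + 5 + 3 bits:
 *   256 buses * 32 devices          =  8192 hot-pluggable slots
 *   256 buses * 32 devices * 8 fns  = 65536 addressable functions
 */
#define PCI_NR_BUSES    256     /* 8-bit bus number */
#define PCI_NR_DEVS     32      /* 5-bit device (slot) number */
#define PCI_NR_FNS      8       /* 3-bit function number */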
--
error compiling committee.c: too many arguments to function