Message-Id: <1195669377.6352.247.camel@bodhitayantram.eng.vmware.com>
Date: Wed, 21 Nov 2007 10:22:57 -0800
From: Zachary Amsden <zach@...are.com>
To: Avi Kivity <avi@...ranet.com>
Cc: Anthony Liguori <aliguori@...ibm.com>,
kvm-devel@...ts.sourceforge.net, virtualization@...ts.osdl.org,
linux-kernel@...r.kernel.org,
Eric Van Hensbergen <ericvanhensbergen@...ibm.com>
Subject: Re: [kvm-devel] [PATCH 3/3] virtio PCI device
On Wed, 2007-11-21 at 09:13 +0200, Avi Kivity wrote:
> Where the device is implemented is an implementation detail that should
> be hidden from the guest; isn't that one of the strengths of
> virtualization? Two examples: a file-based block device implemented in
> qemu gives you fancy file formats with encryption and compression, while
> the same device implemented in the kernel gives you a low-overhead path
> directly to a zillion-disk SAN volume. Or a user-level network device
> capable of running with the slirp stack and no permissions vs. the
> kernel device running copyless most of the time and using a dma engine
> for the rest but requiring you to be good friends with the admin.
>
> The user should expect zero reconfigurations moving a VM from one model
> to the other.
I think that is pretty insightful, and indeed it is probably the only
reason we would ever consider using a virtio-based driver.
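
To make that transparency concrete, here is a minimal guest-side sketch
(not code from the patch under discussion; the 0x1af4 vendor ID is the
value later standardized for virtio, and the driver and function names
are made up for illustration). The point is that the guest binds on PCI
identity alone, so it has no way to tell whether the device model behind
it runs in qemu userspace or in the host kernel:

/*
 * Sketch only: a guest driver that matches on PCI identity and nothing
 * else.  The vendor ID and names are assumptions, not taken from this
 * thread.
 */
#include <linux/module.h>
#include <linux/pci.h>

static const struct pci_device_id virtio_pci_sketch_ids[] = {
	{ PCI_DEVICE(0x1af4, PCI_ANY_ID) },	/* any virtio transport device */
	{ 0 },
};
MODULE_DEVICE_TABLE(pci, virtio_pci_sketch_ids);

static int virtio_pci_sketch_probe(struct pci_dev *pdev,
				   const struct pci_device_id *id)
{
	/* Identical probe path regardless of where the backend lives. */
	return pci_enable_device(pdev);
}

static void virtio_pci_sketch_remove(struct pci_dev *pdev)
{
	pci_disable_device(pdev);
}

static struct pci_driver virtio_pci_sketch_driver = {
	.name		= "virtio-pci-sketch",
	.id_table	= virtio_pci_sketch_ids,
	.probe		= virtio_pci_sketch_probe,
	.remove		= virtio_pci_sketch_remove,
};
module_pci_driver(virtio_pci_sketch_driver);

MODULE_LICENSE("GPL");

Swapping the backend then only requires that the host keep exposing the
same PCI identity; the guest never needs reconfiguring.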
But is this really a virtualization problem, and is virtio the right
place to solve it? Doesn't I/O hotplug with multipathing or NIC teaming
provide the same infrastructure in a way that is useful in more than
just a virtualization context?
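
For comparison, the multipath/teaming idea reduces to the same thing one
layer up: several independent paths reach the same resource, and the
consumer fails over between them without any reconfiguration above that
layer. A rough userspace sketch of that pattern (the path names, policy,
and mpath_write() helper are invented for illustration, not code from
dm-multipath or the bonding driver):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

struct io_path {
	int fd;			/* open descriptor for this path */
	const char *name;	/* e.g. "userspace backend", "kernel backend" */
};

/*
 * Write through the currently active path; on failure, flip to the
 * other path and retry.  Nothing above this function needs to know
 * which path actually carried the data.
 */
static ssize_t mpath_write(struct io_path paths[2], int *active,
			   const void *buf, size_t len)
{
	for (int tries = 0; tries < 2; tries++) {
		ssize_t n = write(paths[*active].fd, buf, len);
		if (n >= 0)
			return n;
		fprintf(stderr, "path '%s' failed: %s, failing over\n",
			paths[*active].name, strerror(errno));
		*active ^= 1;		/* switch to the backup path */
	}
	return -1;			/* both paths are down */
}

int main(void)
{
	struct io_path paths[2] = {
		{ .fd = open("/dev/null", O_WRONLY), .name = "primary" },
		{ .fd = open("/dev/null", O_WRONLY), .name = "backup" },
	};
	int active = 0;

	return mpath_write(paths, &active, "hello", 5) < 0;
}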
Zach