Message-ID: <4AAF95D1.1080600@redhat.com>
Date: Tue, 15 Sep 2009 16:25:37 +0300
From: Avi Kivity <avi@...hat.com>
To: Gregory Haskins <gregory.haskins@...il.com>
CC: "Michael S. Tsirkin" <mst@...hat.com>,
"Ira W. Snyder" <iws@...o.caltech.edu>, netdev@...r.kernel.org,
virtualization@...ts.linux-foundation.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, mingo@...e.hu, linux-mm@...ck.org,
akpm@...ux-foundation.org, hpa@...or.com,
Rusty Russell <rusty@...tcorp.com.au>, s.hetze@...ux-ag.com,
alacrityvm-devel@...ts.sourceforge.net
Subject: Re: [PATCHv5 3/3] vhost_net: a kernel-level virtio server
On 09/15/2009 04:03 PM, Gregory Haskins wrote:
>
>> In this case the x86 is the owner and the ppc boards use translated
>> access. Just switch drivers and device and it falls into place.
>>
>>
> You could switch vbus roles as well, I suppose.
Right, there's no real difference in this regard.
> Another potential
> option is that he can stop mapping host memory on the guest so that it
> follows the more traditional model. As a bus-master device, the ppc
> boards should have access to any host memory at least in the GFP_DMA
> range, which would include all relevant pointers here.
>
> I digress: I was primarily addressing the concern that Ira would need
> to manage the "host" side of the link using hvas mapped from userspace
> (even if host side is the ppc boards). vbus abstracts that access so as
> to allow something other than userspace/hva mappings. OTOH, having each
> ppc board run a userspace app to do the mapping on its behalf and feed
> it to vhost is probably not a huge deal either. Where vhost might
> really fall apart is when any assumptions about pageable memory occur,
> if any.
>
Why? vhost will call get_user_pages() or copy_*_user(), which ought to
do the right thing.
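
Something along these lines (purely illustrative, not the actual vhost
code; the structure and helper names are made up): the server translates
a guest address through a region table supplied by userspace and then
uses the ordinary uaccess helpers, which fault pages in as needed. The
copies just have to run in a context that can see the owner's mm (vhost
adopts it in its worker via use_mm()).

/*
 * Illustrative only: translate a guest-physical address through a
 * userspace-supplied table, then copy with the normal uaccess helpers.
 */
#include <linux/types.h>
#include <linux/uaccess.h>
#include <linux/errno.h>

struct demo_mem_region {		/* hypothetical; filled in via an ioctl */
	u64 guest_phys_addr;
	u64 size;
	u64 userspace_addr;		/* hva in the owning process */
};

static void __user *demo_gpa_to_hva(struct demo_mem_region *r, int n, u64 gpa)
{
	int i;

	for (i = 0; i < n; i++)
		if (gpa >= r[i].guest_phys_addr &&
		    gpa - r[i].guest_phys_addr < r[i].size)
			return (void __user *)(unsigned long)
				(r[i].userspace_addr +
				 (gpa - r[i].guest_phys_addr));
	return NULL;
}

static int demo_read_guest(struct demo_mem_region *r, int n,
			   u64 gpa, void *dst, size_t len)
{
	void __user *hva = demo_gpa_to_hva(r, n, gpa);

	if (!hva)
		return -EFAULT;
	/* copy_from_user() faults the page in if it is not resident */
	return copy_from_user(dst, hva, len) ? -EFAULT : 0;
}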
> As an aside: a bigger issue is that, iiuc, Ira wants more than a single
> ethernet channel in his design (multiple ethernets, consoles, etc). A
> vhost solution in this environment is incomplete.
>
Why? Instantiate as many vhost-nets as needed.
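
Each open of the char device is an independent instance, so multiple
channels just mean multiple fds. Roughly, from userspace (sketch only;
ioctl names as in the eventual mainline API, details may differ in this
patchset):

/* Each open() of /dev/vhost-net yields an independent vhost-net
 * instance, so one per virtio-net device is just one more fd.
 */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int open_vhost_net(void)
{
	int fd = open("/dev/vhost-net", O_RDWR);

	if (fd < 0)
		return -1;
	/* claim the instance for this process's mm */
	if (ioctl(fd, VHOST_SET_OWNER, NULL) < 0) {
		close(fd);
		return -1;
	}
	/* then VHOST_SET_MEM_TABLE, VHOST_SET_VRING_*, VHOST_NET_SET_BACKEND... */
	return fd;
}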
> Note that Ira's architecture highlights that vbus's explicit management
> interface is more valuable here than it is in KVM, since KVM already has
> its own management interface via QEMU.
>
vhost-net and vbus both need management, vhost-net via ioctls and vbus
via configfs. The only difference is the implementation: vhost-net
leaves much more to userspace.
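
For comparison, a configfs flow is just directory and attribute
manipulation. The paths and attribute names below are invented, not the
actual vbus layout; the point is only the mkdir/write lifecycle versus
ioctls on an fd:

/* Invented paths/attributes, purely to illustrate configfs-style
 * management: the kernel object is created by mkdir and tuned by
 * writing attribute files.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

static int demo_create_device(const char *name)
{
	char path[256];
	int fd;

	snprintf(path, sizeof(path), "/config/demo-bus/devices/%s", name);
	if (mkdir(path, 0755) < 0)		/* kernel instantiates the object */
		return -1;

	strncat(path, "/enabled", sizeof(path) - strlen(path) - 1);
	fd = open(path, O_WRONLY);		/* attributes are plain files */
	if (fd < 0)
		return -1;
	if (write(fd, "1\n", 2) != 2) {
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}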
--
error compiling committee.c: too many arguments to function