Message-ID: <20130902075722.GZ15729@zion.uk.xensource.com>
Date: Mon, 2 Sep 2013 08:57:22 +0100
From: Wei Liu <wei.liu2@...rix.com>
To: Qin Chuanyu <qinchuanyu@...wei.com>
CC: Anthony Liguori <anthony@...emonkey.ws>,
"Michael S. Tsirkin" <mst@...hat.com>, <jasowang@...hat.com>,
KVM list <kvm@...r.kernel.org>, <netdev@...r.kernel.org>,
<qianhuibin@...wei.com>,
"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>,
<wangfuhai@...wei.com>, <likunyun@...wei.com>,
<liuyongan@...wei.com>, <liuyingdong@...wei.com>,
<wei.liu2@...rix.com>
Subject: Re: Is fallback vhost_net to qemu for live migrate available?
On Sat, Aug 31, 2013 at 12:45:11PM +0800, Qin Chuanyu wrote:
> On 2013/8/30 0:08, Anthony Liguori wrote:
> >Hi Qin,
>
> >>By changing the memory copy and notify mechanisms, virtio-net with
> >>vhost_net can now run on Xen with good performance.
> >
> >I think the key in doing this would be to implement a proper
> >ioeventfd and irqfd interface in the driver domain kernel. Just
> >hacking vhost_net with Xen-specific knowledge would be pretty nasty
> >IMHO.
> >
> Yes, I added a kernel module which persists the virtio-net PIO
> address and MSI-X address, as the kvm module does. The guest wakes
> up the vhost thread through a hook function added in evtchn_interrupt.
>
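(For concreteness, I imagine the hook looks roughly like the sketch
below. vhost_evtchn_lookup() is a made-up name for however your module
finds the eventfd registered for the event channel; eventfd_signal()
is the real kernel primitive that wakes the vhost worker, mirroring
what KVM's irqfd does.)

#include <linux/interrupt.h>
#include <linux/eventfd.h>

/* Hypothetical hook called from evtchn_interrupt().  Only
 * eventfd_signal() is a real kernel API here; the lookup helper
 * is illustrative. */
static irqreturn_t vhost_evtchn_hook(int irq, void *dev_id)
{
	struct eventfd_ctx *kick = vhost_evtchn_lookup(dev_id);

	if (kick)
		eventfd_signal(kick, 1);	/* wake the vhost worker */

	return IRQ_HANDLED;
}
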
> >Did you modify the front end driver to do grant table mapping or is
> >this all being done by mapping the domain's memory?
> >
> There is nothing changed in the front-end driver. Currently I use
> alloc_vm_area to get address space, and map the domain's memory the
> same way qemu does.
>
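(If I understand correctly, the in-kernel equivalent of what qemu does
via privcmd would look something like the sketch below, assuming for
illustration a contiguous machine frame range; a real implementation
would take a list of frames. alloc_vm_area() and the hypercall wrapper
are real interfaces, the rest is made up.)

#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/page.h>

/* Sketch: reserve kernel VA space and point its PTEs at another
 * domain's machine frames.  This is the privileged foreign-mapping
 * step; it only works from a sufficiently privileged backend. */
static void *map_guest_range(domid_t domid, unsigned long mfn,
			     unsigned int nr_pages)
{
	struct vm_struct *area;
	unsigned long va;
	unsigned int i;

	area = alloc_vm_area(nr_pages << PAGE_SHIFT, NULL);
	if (!area)
		return NULL;
	va = (unsigned long)area->addr;

	for (i = 0; i < nr_pages; i++) {
		if (HYPERVISOR_update_va_mapping_otherdomain(
				va + (i << PAGE_SHIFT),
				mfn_pte(mfn + i, PAGE_KERNEL),
				0, domid)) {
			free_vm_area(area);
			return NULL;
		}
	}
	return area->addr;
}
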
You mean you're using xc_map_foreign_range and friends in the backend to
map guest memory? That's not very desirable as it violates Xen's
security model. It would not be too hard to pass grant references
instead of guest physical memory addresses IMHO.
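
Roughly, that would mean the backend doing what gntdev and blkback
already do (a minimal sketch using the real in-kernel grant-table API;
unmap and error paths omitted for brevity):

#include <linux/slab.h>
#include <linux/mm.h>
#include <xen/balloon.h>
#include <xen/grant_table.h>

/* Map a batch of grant references from the frontend instead of raw
 * guest addresses.  Each ref was explicitly granted by the guest,
 * so no privileged foreign mapping is needed. */
static int map_grant_refs(domid_t otherend, grant_ref_t *refs,
			  struct page **pages, unsigned int count)
{
	struct gnttab_map_grant_ref *ops;
	unsigned int i;
	int err;

	ops = kcalloc(count, sizeof(*ops), GFP_KERNEL);
	if (!ops)
		return -ENOMEM;

	/* Backend-local pages that will hold the mappings. */
	err = alloc_xenballooned_pages(count, pages, false);
	if (err)
		goto out;

	for (i = 0; i < count; i++)
		gnttab_set_map_op(&ops[i],
				  (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i])),
				  GNTMAP_host_map, refs[i], otherend);

	/* One hypercall maps the whole batch. */
	err = gnttab_map_refs(ops, NULL, pages, count);
out:
	kfree(ops);
	return err;
}
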
Wei.