Message-ID: <C85CEDA13AB1CF4D9D597824A86D2B9006AEB9477C@PDSMSX501.ccr.corp.intel.com>
Date: Tue, 1 Sep 2009 13:04:58 +0800
From: "Xin, Xiaohui" <xiaohui.xin@...el.com>
To: Avi Kivity <avi@...hat.com>
CC: "mst@...hat.com" <mst@...hat.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mingo@...e.hu" <mingo@...e.hu>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"hpa@...or.com" <hpa@...or.com>,
"gregory.haskins@...il.com" <gregory.haskins@...il.com>
Subject: RE: [PATCHv5 3/3] vhost_net: a kernel-level virtio server
> One way to share the effort is to make vmdq queues available as normal
> kernel interfaces. It would take quite a bit of work, but the end
> result is that no other components need to be changed, and it makes vmdq
> useful outside kvm. It also greatly reduces the amount of integration
> work needed throughout the stack (kvm/qemu/libvirt).
Yes. The common queue-pair interface we want to present will also apply to normal hardware, and we will try to leave the other components untouched.
Thanks
Xiaohui
-----Original Message-----
From: Avi Kivity [mailto:avi@...hat.com]
Sent: Tuesday, September 01, 2009 1:52 AM
To: Xin, Xiaohui
Cc: mst@...hat.com; netdev@...r.kernel.org; virtualization@...ts.linux-foundation.org; kvm@...r.kernel.org; linux-kernel@...r.kernel.org; mingo@...e.hu; linux-mm@...ck.org; akpm@...ux-foundation.org; hpa@...or.com; gregory.haskins@...il.com
Subject: Re: [PATCHv5 3/3] vhost_net: a kernel-level virtio server
On 08/31/2009 02:42 PM, Xin, Xiaohui wrote:
> Hi, Michael
> That's a great job. We are now working on VMDq support for KVM, and since the VMDq hardware presents L2 sorting based on MAC addresses and VLAN tags, our target is to implement a zero-copy solution using VMDq. We started from the virtio-net architecture. What we want to propose is to use AIO combined with direct I/O:
> 1) Modify the virtio-net backend service in Qemu to submit aio requests composed from the virtqueue.
> 2) Modify the TUN/TAP device to support aio operations and direct mapping of user-space buffers into the host kernel.
> 3) Let a TUN/TAP device bind to a single rx/tx queue of the NIC.
> 4) Modify the net_dev and skb structures to permit an allocated skb to use a directly mapped user-space payload buffer address rather than a kernel-allocated one.
>
> As zero copy is also your goal, we are interested in what's in your mind, and would like to collaborate with you if possible.
>
One way to share the effort is to make vmdq queues available as normal
kernel interfaces. It would take quite a bit of work, but the end
result is that no other components need to be changed, and it makes vmdq
useful outside kvm. It also greatly reduces the amount of integration
work needed throughout the stack (kvm/qemu/libvirt).
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
--