Message-ID: <F2E9EB7348B8264F86B6AB8151CE2D79026FAE0BDB@shsmsx502.ccr.corp.intel.com>
Date: Thu, 22 Apr 2010 16:57:56 +0800
From: "Xin, Xiaohui" <xiaohui.xin@...el.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"mingo@...e.hu" <mingo@...e.hu>,
"jdike@...ux.intel.com" <jdike@...ux.intel.com>,
"davem@...emloft.net" <davem@...emloft.net>
Subject: RE: [RFC][PATCH v2 0/3] Provide a zero-copy method on KVM
virtio-net.
Michael,
>Yes, I think this packet split mode probably maps well to mergeable buffer
>support. Note that
>1. Not all devices support large packets in this way, others might map
> to indirect buffers better
Are the indirect buffers meant to handle skb->frag_list?
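To make the question concrete, here is a minimal sketch of what I imagine on the
guest tx side, assuming the driver gathers the whole skb (linear data, page frags
and frag_list) into one scatterlist and posts it as a single chain, which the ring
could then describe with one indirect descriptor table. struct my_vq, MAX_SG_ENTRIES
and add_chain() are placeholders for illustration, not an existing API:

#include <linux/skbuff.h>
#include <linux/scatterlist.h>

#define MAX_SG_ENTRIES	64	/* placeholder bound on total fragments */

static int queue_skb_as_one_chain(struct my_vq *vq, struct sk_buff *skb)
{
	struct scatterlist sg[MAX_SG_ENTRIES];
	int nents;

	sg_init_table(sg, MAX_SG_ENTRIES);

	/* skb_to_sgvec() walks the linear area, the page frags and frag_list */
	nents = skb_to_sgvec(skb, sg, 0, skb->len);
	if (nents < 0)
		return nents;

	/* one call, one descriptor chain; with VIRTIO_RING_F_INDIRECT_DESC
	 * the device can see it as a single indirect descriptor */
	return add_chain(vq, sg, nents, skb);
}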
> So we have to figure out how migration is going to work
Yes, different guest virtio-net drivers may support different features.
Does qemu migration currently work when the virtio-net driver supports
different features on each side?
>2. It's up to guest driver whether to enable features such as
> mergeable buffers and indirect buffers
> So we have to figure out how to notify guest which mode
> is optimal for a given device
Yes. When a device is bound, the mp device can query its capabilities from the driver.
Actually, there is already a structure in the mp device that can do this; we can add
some fields to support more.
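Roughly what I have in mind for such an extension is sketched below; the names and
fields are made up for illustration and do not exist in the current patches:

#include <linux/types.h>

/*
 * Hypothetical capability structure the mp device could fill in at bind
 * time, so the backend knows which receive layout (mergeable buffers,
 * indirect descriptors, ...) fits the device best.
 */
struct mp_dev_caps {
	u32	features;			/* MP_CAP_* bits below */
#define MP_CAP_MERGEABLE_RXBUF	(1u << 0)	/* device splits packets across pages */
#define MP_CAP_INDIRECT_DESC	(1u << 1)	/* large buffers via indirect chains */
#define MP_CAP_GSO		(1u << 2)	/* device accepts GSO packets on tx */
	u16	max_rx_frags;			/* pages one received packet may span */
	u16	rx_buf_size;			/* preferred per-buffer size in bytes */
};

/* the backend could then query this when the NIC is bound, e.g.:
 *	err = mp_dev_get_caps(mp_dev, &caps);
 */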
>3. We don't want to depend on jumbo frames for decent performance
> So we probably should support GSO/GRO
GSO is for the tx side, right? I think the driver can handle it by itself.
For GRO, I'm not sure whether it is easy or not. Basically, the mp device we
support now does what a raw socket does: the packets do not go through the host stack.
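To spell out how I read the rx side: since the packets bypass the host stack, host
GRO is not available, so large receives would depend on what the NIC provides and
what the guest has negotiated. A rough illustration using the existing virtio-net
feature bits; the helper function itself is made up:

#include <linux/types.h>
#include <linux/virtio_net.h>

/*
 * Illustration only: the guest can accept large rx packets without host
 * GRO if it negotiated guest-side TSO plus mergeable receive buffers.
 */
static bool guest_can_take_large_rx(u64 guest_features)
{
	return (guest_features & (1ULL << VIRTIO_NET_F_GUEST_TSO4)) &&
	       (guest_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF));
}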