Message-ID: <a8dca7bc-75c8-e21e-a40e-f09d7b47c523@intel.com>
Date: Mon, 22 Jan 2018 13:05:15 -0800
From: "Samudrala, Sridhar" <sridhar.samudrala@...el.com>
To: Siwei Liu <loseweigh@...il.com>
Cc: "Michael S. Tsirkin" <mst@...hat.com>,
Stephen Hemminger <stephen@...workplumber.org>,
David Miller <davem@...emloft.net>,
Netdev <netdev@...r.kernel.org>,
virtualization@...ts.linux-foundation.org,
virtio-dev@...ts.oasis-open.org,
"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
Alexander Duyck <alexander.h.duyck@...el.com>,
Jakub Kicinski <kubakici@...pl>
Subject: Re: [RFC PATCH net-next v2 2/2] virtio_net: Extend virtio to use VF
datapath when available
On 1/22/2018 12:27 PM, Siwei Liu wrote:
> First off, as mentioned in another thread, the model of stacking
> virt-bond functionality on top of virtio seems like the wrong direction
> to me. Essentially, the migration process would need to carry over all
> guest side configuration previously done on the VF/PT device and move
> it to the new device, be it virtio or VF/PT. Without the help of a new
> upper layer bond driver that enslaves the virtio and VF/PT devices
> underneath, virtio will be overloaded with too many specifics of being
> a VF/PT backup in the future. I hope you're already aware of this
> longer-term issue and will move to that model as soon as possible. See
> more inline.
The idea behind this design is to provide a low latency datapath to
virtio_net while preserving the live migration feature, without the
need for the guest admin to configure a bond between the VF and
virtio_net.
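
(For context, below is roughly the guest-side configuration that this
feature makes unnecessary: an active-backup bond over the two devices.
This is only an illustrative sketch; the interface names ens3/ens4 and
option values are made up.)

    # Create an active-backup bond and enslave virtio_net (ens3) and
    # the VF (ens4); interfaces must be down before enslaving.
    ip link add bond0 type bond mode active-backup miimon 100
    ip link set ens3 down
    ip link set ens3 master bond0    # virtio_net, used as backup
    ip link set ens4 down
    ip link set ens4 master bond0    # VF
    ip link set dev bond0 type bond primary ens4   # prefer the VF
    ip link set bond0 up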
As this feature is enabled and configured via virtio_net, which has a
back channel to the hypervisor, adding this functionality to virtio_net
looks like a reasonable option. Adding a new driver and a new device
would require defining a new interface and a channel between the
hypervisor and the VM; if required, we may implement that in the
future.
>
> On Thu, Jan 11, 2018 at 9:58 PM, Sridhar Samudrala
> <sridhar.samudrala@...el.com> wrote:
>> This patch enables virtio_net to switch over to a VF datapath when a VF
>> netdev is present with the same MAC address. The VF datapath is only used
>> for unicast traffic. Broadcasts/multicasts go via virtio datapath so that
>> east-west broadcasts don't use the PCI bandwidth.
> Why not make this an option/optimization rather than the only means?
> The problem of east-west broadcasts eating PCI bandwidth depends on
> the specifics of the (virtual) network setup, while some users won't
> want to lose the VF's merits such as latency. Why restrict
> broadcast/multicast xmit to virtio only, which potentially regresses
> performance relative to a raw VF?
I am planning to remove this option when I resubmit the patches.
>
>> It allows live migration
>> of a VM with a directly attached VF without the need to set up a
>> bond/team between a VF and virtio_net device in the guest.
>>
>> The hypervisor needs to unplug the VF device from the guest on the
>> source host and reset the MAC filter of the VF to initiate failover
>> of the datapath to virtio before starting the migration. After the
>> migration is completed, the destination hypervisor sets the MAC
>> filter on the VF and plugs it back into the guest to switch over to
>> the VF datapath.
> Is there a host side patch (planned) for this MAC filter switching
> process? As said in another thread, that simple script won't work for
> the macvtap backend.
The host side patch that enables qemu to configure this feature is
included in this patch series.
I have been testing this feature using a shell script, but I hope
someone in the libvirt community will extend 'virsh' to handle live
migration when this feature is supported.
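
(For reference, a minimal sketch of the hypervisor-side sequence
described above; this is not the actual test script. $DOMAIN, $PF,
$VF_NUM, $GUEST_MAC, $DEST_HOST and vf-hostdev.xml are placeholders,
and how the VF's MAC filter is "reset" is device specific; a zero MAC
is shown only as an example.)

    # Source host: detach the VF so the guest fails over to virtio,
    # then clear the VF's MAC filter so frames reach the virtio backend.
    virsh detach-device "$DOMAIN" vf-hostdev.xml --live
    ip link set "$PF" vf "$VF_NUM" mac 00:00:00:00:00:00
    virsh migrate --live "$DOMAIN" "qemu+ssh://$DEST_HOST/system"

    # Destination host: restore the MAC filter on the VF and plug it
    # back into the guest to switch back to the VF datapath.
    ip link set "$PF" vf "$VF_NUM" mac "$GUEST_MAC"
    virsh attach-device "$DOMAIN" vf-hostdev.xml --live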
Thanks
Sridhar