Message-ID: <20191127154014.2b91ecc2@cakuba.netronome.com>
Date: Wed, 27 Nov 2019 15:40:14 -0800
From: Jakub Kicinski <jakub.kicinski@...ronome.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Prashant Bhole <prashantbhole.linux@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Jason Wang <jasowang@...hat.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
Andrii Nakryiko <andriin@...com>, netdev@...r.kernel.org,
qemu-devel@...gnu.org, kvm@...r.kernel.org
Subject: Re: [RFC net-next 00/18] virtio_net XDP offload
On Wed, 27 Nov 2019 15:32:17 -0500, Michael S. Tsirkin wrote:
> On Tue, Nov 26, 2019 at 12:35:14PM -0800, Jakub Kicinski wrote:
> > On Tue, 26 Nov 2019 19:07:26 +0900, Prashant Bhole wrote:
> > > Note: This RFC has been sent to netdev as well as qemu-devel lists
> > >
> > > This series introduces XDP offloading from virtio_net. It is based on
> > > the following work by Jason Wang:
> > > https://netdevconf.info/0x13/session.html?xdp-offload-with-virtio-net
> > >
> > > Current XDP performance in virtio-net is far from what we can
> > > achieve on the host. Several major factors cause the difference:
> > > - Cost of virtualization
> > > - Cost of virtio (populating the virtqueue and context switching)
> > > - Cost of vhost (it needs more optimization)
> > > - Cost of data copy
> > > Because of the above reasons there is a need to offload the XDP
> > > program to the host. This set is an attempt to implement XDP
> > > offload from the guest.
> >
> > This turns the guest kernel into a uAPI proxy.
> >
> > BPF uAPI calls related to the "offloaded" BPF objects are forwarded
> > to the hypervisor; they pop up in QEMU, which makes the requested
> > call to the hypervisor kernel. Today it's the Linux kernel; tomorrow
> > it may be someone's proprietary "SmartNIC" implementation.
> >
> > Why can't those calls be forwarded at the higher layer? Why do they
> > have to go through the guest kernel?
>
> Well, everyone is writing these programs and attaching them to NICs.
Who's everyone?
> For better or worse that's how userspace is written.
HW offload requires modifying userspace, too. The offload is not
transparent. Are you aware of that?
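To make that concrete, here is a rough sketch (hypothetical helper,
error handling trimmed) of what a guest application already has to do
with libbpf to request HW offload today: the target ifindex has to be
passed at program load time, and XDP_FLAGS_HW_MODE has to be requested
at attach time.

/* Hypothetical example, not part of this set: explicit opt-in to HW
 * offload with today's uAPI. Both the load and the attach name the
 * device; nothing here is transparent to the application.
 */
#include <net/if.h>
#include <linux/if_link.h>
#include <bpf/libbpf.h>

static int load_hw_offloaded(const char *obj_path, const char *ifname)
{
        struct bpf_program *prog;
        struct bpf_object *obj;
        int ifindex;

        ifindex = if_nametoindex(ifname);
        obj = bpf_object__open(obj_path);
        if (!ifindex || libbpf_get_error(obj))
                return -1;

        /* Offloaded programs are verified and translated against a
         * specific netdev, so the ifindex is set before load. */
        bpf_object__for_each_program(prog, obj)
                bpf_program__set_ifindex(prog, ifindex);

        if (bpf_object__load(obj))
                return -1;

        prog = bpf_program__next(NULL, obj);
        if (!prog)
                return -1;

        /* ...and the attach explicitly requests HW mode. */
        return bpf_set_link_xdp_fd(ifindex, bpf_program__fd(prog),
                                   XDP_FLAGS_HW_MODE);
}

As far as I can tell that's the same uAPI this set plugs virtio_net
into, so the guest application is already not device-agnostic.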
> Yes, in the simple case where everything is passed through, it could
> instead be passed through some other channel just as well, but then
> userspace would need significant changes just to make it work with
> virtio.
There is a recently spawned effort to create an "XDP daemon", or more
generally a control application, which would among other things link
separate XDP programs so they can share a NIC attachment point.
Making use of cloud APIs would be a perfect addition to that.
Obviously, if one asks a kernel guy to solve a problem, one will get
kernel code as an answer. And writing higher-layer code requires
companies to actually organize their teams and have "full stack"
strategies.
We've seen this story already with the net_failover wart. At least
that time we weren't risking building a proxy to someone's proprietary
FW.
> > If the kernel performs no significant work (or "adds value", pardon
> > the expression), and the problem can easily be solved otherwise, we
> > shouldn't do the work of maintaining the mechanism.
> >
> > The approach of the kernel generating actual machine code which is
> > then loaded into a sandbox on the hypervisor/SmartNIC is another
> > story.
>
> But that's transparent to guest userspace. Making userspace care whether
> it's a SmartNIC or a software device breaks part of virtualization's
> appeal, which is that it looks like a hardware box to the guest.
It's not hardware unless you've JITed machine code for it; it's just
someone else's software.
I'm not arguing with the appeal. I'm arguing the risk/benefit ratio
doesn't justify opening this can of worms.
> > I'd appreciate it if others could chime in.