Message-ID: <1675651276.3841548-3-xuanzhuo@linux.alibaba.com>
Date: Mon, 6 Feb 2023 10:41:16 +0800
From: Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: Paolo Abeni <pabeni@...hat.com>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Jason Wang <jasowang@...hat.com>,
Björn Töpel <bjorn@...nel.org>,
Magnus Karlsson <magnus.karlsson@...el.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Jonathan Lemon <jonathan.lemon@...il.com>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Menglong Dong <imagedong@...cent.com>,
Kuniyuki Iwashima <kuniyu@...zon.com>,
Petr Machata <petrm@...dia.com>,
virtualization@...ts.linux-foundation.org, bpf@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
On Fri, 3 Feb 2023 04:17:59 -0500, "Michael S. Tsirkin" <mst@...hat.com> wrote:
> On Fri, Feb 03, 2023 at 11:33:31AM +0800, Xuan Zhuo wrote:
> > On Thu, 02 Feb 2023 15:41:44 +0100, Paolo Abeni <pabeni@...hat.com> wrote:
> > > On Thu, 2023-02-02 at 19:00 +0800, Xuan Zhuo wrote:
> > > > XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero
> > > > copy feature of xsk (XDP socket) needs support from the driver, and its
> > > > performance is very good. mlx5 and Intel ixgbe already support this feature.
> > > > This patch set allows virtio-net to support xsk's zero copy xmit feature.
> > > >
> > > > Virtio-net did not support per-queue reset, so it was impossible to support
> > > > XDP socket zero copy. Now that the per-queue reset work has been completed in
> > > > both the virtio spec and the kernel, it is time for virtio-net to add support
> > > > for XDP socket zero copy.
> > > >
> > > > Virtio-net cannot add queues at will, so xsk shares the queue with the
> > > > kernel.
> > > >
> > > > On the other hand, virtio-net does not support generating an interrupt
> > > > manually, so when we wake up tx xmit we use a trick: if the CPU that last ran
> > > > the TX NAPI is a different CPU, we use an IPI to wake up NAPI on the remote
> > > > CPU; if it is the local CPU, we wake up softirqd.
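
To illustrate the wakeup policy described above, here is a rough sketch only
(this is not the actual code in this series; xsk_tx_wakeup() and
xsk_tx_ipi_kick() are hypothetical helper names used just for the example):

/* Sketch of the TX wakeup policy described in the cover letter. */
#include <linux/smp.h>
#include <linux/netdevice.h>

/* Hypothetical IPI callback: schedule the TX NAPI on the remote CPU. */
static void xsk_tx_ipi_kick(void *data)
{
        napi_schedule((struct napi_struct *)data);
}

/* Hypothetical wakeup helper: @last_cpu is where TX NAPI last ran. */
static void xsk_tx_wakeup(struct napi_struct *napi, int last_cpu)
{
        int cpu = get_cpu();    /* disable preemption while comparing CPUs */

        if (last_cpu != cpu)
                /* TX NAPI last ran on another CPU: kick it there via IPI. */
                smp_call_function_single(last_cpu, xsk_tx_ipi_kick, napi, 0);
        else
                /* Same CPU: schedule NAPI; softirqd will process it. */
                napi_schedule(napi);

        put_cpu();
}
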
> > >
> > > Thank you for the large effort.
> > >
> > > Since this will likely need a few iterations, on the next revision please
> > > do split the work into multiple chunks to help the review effort -
> > > from Documentation/process/maintainer-netdev.rst:
> > >
> > > - don't post large series (> 15 patches), break them up
> > >
> > > In this case I guess you can split it into 1 (or even 2) pre-req series
> > > and another one for the actual xsk zero copy support.
> >
> >
> > OK.
> >
> > I can split the patches into multiple parts, such as:
> >
> > * virtio core
> > * xsk
> > * virtio-net prepare
> > * virtio-net support xsk zerocopy
> >
> > However, there is a problem: the virtio core part should go through Michael's
> > vhost branch. Then, to which branch should I post the follow-up patches, vhost
> > or net-next?
> >
> > Thanks.
> >
>
> Here are some ideas on how to make the patchset smaller
> and easier to merge:
> - keep everything in virtio_net.c for now. We can split
>   things out later, but this way your patchset will not
>   conflict with every single change merged meanwhile.
>   Also, the split-up needs to be done carefully, with sane
>   APIs between components; let's maybe not waste time
>   on that now and do the split-up later.
> - you have patches that add APIs and then other patches
>   that use them. As long as it's only virtio-net, just add
>   and use them in a single patch; review is actually easier this way.
I will try to merge #16-#18 and #20-#23.
> - we can try merging pre-requisites earlier; then the
>   patchset size will shrink.
Do you mean the virtio core patches? Should we put these
patches into the vhost branch?

Thanks.
>
>
> > >
> > > Thanks!
> > >
> > > Paolo
> > >
>