Message-ID: <CAJ+HfNiX-VjPBQSNBbWpVTutT_o3qAz-XvtTJdKOsUvyLF3JRw@mail.gmail.com>
Date: Tue, 28 Aug 2018 19:42:57 +0200
From: Björn Töpel <bjorn.topel@...il.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: "Karlsson, Magnus" <magnus.karlsson@...el.com>,
Magnus Karlsson <magnus.karlsson@...il.com>,
"Duyck, Alexander H" <alexander.h.duyck@...el.com>,
Alexander Duyck <alexander.duyck@...il.com>, ast@...nel.org,
Daniel Borkmann <daniel@...earbox.net>,
Netdev <netdev@...r.kernel.org>,
"Brandeburg, Jesse" <jesse.brandeburg@...el.com>,
"Singhai, Anjali" <anjali.singhai@...el.com>,
peter.waskiewicz.jr@...el.com,
Björn Töpel <bjorn.topel@...el.com>,
michael.lundkvist@...csson.com,
Willem de Bruijn <willemdebruijn.kernel@...il.com>,
John Fastabend <john.fastabend@...il.com>,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
neerav.parikh@...el.com,
Mykyta Iziumtsev <mykyta.iziumtsev@...aro.org>,
Francois Ozog <francois.ozog@...aro.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Brian Brooks <brian.brooks@...aro.org>,
William Tu <u9012063@...il.com>, pavel@...tnetmon.com,
"Zhang, Qi Z" <qi.z.zhang@...el.com>
Subject: Re: [PATCH bpf-next 01/11] xdp: implement convert_to_xdp_frame for MEM_TYPE_ZERO_COPY
Den tis 28 aug. 2018 kl 16:11 skrev Jesper Dangaard Brouer <brouer@...hat.com>:
>
> On Tue, 28 Aug 2018 14:44:25 +0200
> Björn Töpel <bjorn.topel@...il.com> wrote:
>
> > From: Björn Töpel <bjorn.topel@...el.com>
> >
> > This commit adds proper MEM_TYPE_ZERO_COPY support for
> > convert_to_xdp_frame. Converting a MEM_TYPE_ZERO_COPY xdp_buff to an
> > xdp_frame is done by transforming the MEM_TYPE_ZERO_COPY buffer into a
> > MEM_TYPE_PAGE_ORDER0 frame. This is costly, and in the future it might
> > make sense to implement a more sophisticated thread-safe alloc/free
> > scheme for MEM_TYPE_ZERO_COPY, so that neither allocation nor copy is
> > required in the fast-path.
>
> This is going to be slow. Especially the dev_alloc_page() call, which
> for small frames is likely going to be slower than the data copy.
> I guess this is a good first step, but I do hope we will circle back and
> optimize this later. (It would also be quite easy to use
> MEM_TYPE_PAGE_POOL instead to get page recycling in the devmap redirect case.)
>
Yes, slow. :-( Still, I think this is a good starting point; a page
pool can then be introduced in a later performance-oriented series to
make XDP faster for the AF_XDP scenario.
But I'm definitely on your side here; this needs to be addressed -- but
not now, IMO.
And thanks for spending time on the series!
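(For anyone following along outside the kernel tree, here is a rough
userspace model of the copy and layout that xdp_convert_zc_to_xdp_frame()
performs: the freshly allocated order-0 page holds the xdp_frame struct
first, then the metadata, then the packet data. All struct and function
names below are mocked for illustration, PAGE_SIZE is assumed to be 4096,
and the xdp_data_meta_unsupported() check is dropped for brevity -- this
is a sketch, not kernel code:)

```c
#include <stdlib.h>
#include <string.h>

#define MOCK_PAGE_SIZE 4096

/* Mocked stand-ins for the kernel structs, illustration only. */
struct mock_xdp_buff {
	void *data;
	void *data_end;
	void *data_meta;
	void *data_hard_start;
};

struct mock_xdp_frame {
	void *data;
	unsigned int len;
	unsigned int headroom;
	unsigned int metasize;
};

static struct mock_xdp_frame *mock_convert(struct mock_xdp_buff *xdp)
{
	unsigned int metasize, totsize;
	struct mock_xdp_frame *xdpf;
	void *addr, *data_to_copy;

	metasize = (char *)xdp->data - (char *)xdp->data_meta;
	totsize = (char *)xdp->data_end - (char *)xdp->data + metasize;

	/* frame struct + metadata + data must fit in one order-0 page */
	if (sizeof(*xdpf) + totsize > MOCK_PAGE_SIZE)
		return NULL;

	addr = malloc(MOCK_PAGE_SIZE);	/* stands in for dev_alloc_page() */
	if (!addr)
		return NULL;

	xdpf = addr;
	memset(xdpf, 0, sizeof(*xdpf));

	/* metadata sits directly before data, so one memcpy covers both */
	data_to_copy = metasize ? xdp->data_meta : xdp->data;
	memcpy((char *)addr + sizeof(*xdpf), data_to_copy, totsize);

	xdpf->data = (char *)addr + sizeof(*xdpf) + metasize;
	xdpf->len = totsize - metasize;
	xdpf->headroom = 0;	/* no headroom survives the copy */
	xdpf->metasize = metasize;
	return xdpf;
}
```

The point being: the whole frame, metadata included, is copied into a
fresh page on every conversion, which is exactly the per-packet cost we
would like a recycling scheme to avoid.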
Björn
> I would have liked the MEM_TYPE_ZERO_COPY frame to travel one level
> deeper into the redirect-core code. Allowing devmap to send these
> frames without a copy, and allow cpumap to do the dev_alloc_page() call
> (+copy) on the remote CPU.
>
>
> > Signed-off-by: Björn Töpel <bjorn.topel@...el.com>
> > ---
> > include/net/xdp.h | 5 +++--
> > net/core/xdp.c | 39 +++++++++++++++++++++++++++++++++++++++
> > 2 files changed, 42 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/net/xdp.h b/include/net/xdp.h
> > index 76b95256c266..0d5c6fb4b2e2 100644
> > --- a/include/net/xdp.h
> > +++ b/include/net/xdp.h
> > @@ -91,6 +91,8 @@ static inline void xdp_scrub_frame(struct xdp_frame *frame)
> > frame->dev_rx = NULL;
> > }
> >
> > +struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp);
> > +
> > /* Convert xdp_buff to xdp_frame */
> > static inline
> > struct xdp_frame *convert_to_xdp_frame(struct xdp_buff *xdp)
> > @@ -99,9 +101,8 @@ struct xdp_frame *convert_to_xdp_frame(struct xdp_buff *xdp)
> > int metasize;
> > int headroom;
> >
> > - /* TODO: implement clone, copy, use "native" MEM_TYPE */
> > if (xdp->rxq->mem.type == MEM_TYPE_ZERO_COPY)
> > - return NULL;
> > + return xdp_convert_zc_to_xdp_frame(xdp);
> >
> > /* Assure headroom is available for storing info */
> > headroom = xdp->data - xdp->data_hard_start;
> > diff --git a/net/core/xdp.c b/net/core/xdp.c
> > index 89b6785cef2a..be6cb2f0e722 100644
> > --- a/net/core/xdp.c
> > +++ b/net/core/xdp.c
> > @@ -398,3 +398,42 @@ void xdp_attachment_setup(struct xdp_attachment_info *info,
> > info->flags = bpf->flags;
> > }
> > EXPORT_SYMBOL_GPL(xdp_attachment_setup);
> > +
> > +struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp)
> > +{
> > + unsigned int metasize, headroom, totsize;
> > + void *addr, *data_to_copy;
> > + struct xdp_frame *xdpf;
> > + struct page *page;
> > +
> > + /* Clone into a MEM_TYPE_PAGE_ORDER0 xdp_frame. */
> > + metasize = xdp_data_meta_unsupported(xdp) ? 0 :
> > + xdp->data - xdp->data_meta;
> > + headroom = xdp->data - xdp->data_hard_start;
> > + totsize = xdp->data_end - xdp->data + metasize;
> > +
> > + if (sizeof(*xdpf) + totsize > PAGE_SIZE)
> > + return NULL;
> > +
> > + page = dev_alloc_page();
> > + if (!page)
> > + return NULL;
> > +
> > + addr = page_to_virt(page);
> > + xdpf = addr;
> > + memset(xdpf, 0, sizeof(*xdpf));
> > +
> > + addr += sizeof(*xdpf);
> > + data_to_copy = metasize ? xdp->data_meta : xdp->data;
> > + memcpy(addr, data_to_copy, totsize);
> > +
> > + xdpf->data = addr + metasize;
> > + xdpf->len = totsize - metasize;
> > + xdpf->headroom = 0;
> > + xdpf->metasize = metasize;
> > + xdpf->mem.type = MEM_TYPE_PAGE_ORDER0;
> > +
> > + xdp_return_buff(xdp);
> > + return xdpf;
> > +}
> > +EXPORT_SYMBOL_GPL(xdp_convert_zc_to_xdp_frame);
>
>
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer