Message-ID: <20180308163030.5a955cb8@redhat.com>
Date: Thu, 8 Mar 2018 16:30:30 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Alexander Duyck <alexander.duyck@...il.com>
Cc: Netdev <netdev@...r.kernel.org>,
Björn Töpel <bjorn.topel@...el.com>,
"Karlsson, Magnus" <magnus.karlsson@...el.com>,
Eugenia Emantayev <eugenia@...lanox.com>,
Jason Wang <jasowang@...hat.com>,
John Fastabend <john.fastabend@...il.com>,
Eran Ben Elisha <eranbe@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>,
Gal Pressman <galp@...lanox.com>,
Daniel Borkmann <borkmann@...earbox.net>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Tariq Toukan <tariqt@...lanox.com>, brouer@...hat.com
Subject: Re: [bpf-next V2 PATCH 00/15] XDP redirect memory return API
On Thu, 8 Mar 2018 07:03:07 -0800
Alexander Duyck <alexander.duyck@...il.com> wrote:
> On Thu, Mar 8, 2018 at 5:07 AM, Jesper Dangaard Brouer
> <brouer@...hat.com> wrote:
> > This patchset works towards supporting different XDP RX-ring memory
> > allocators. As this will be needed by the AF_XDP zero-copy mode.
> >
> > The patchset uses mlx5 as the sample driver, which gets XDP_REDIRECT
> > RX-mode implemented, but not ndo_xdp_xmit (as this API is subject to
> > change throughout the patchset).
> >
> > A new struct xdp_frame is introduced (modeled after cpumap xdp_pkt).
> > And both ndo_xdp_xmit and the new xdp_return_frame end-up using this.
> >
> > Support for a driver supplied allocator is implemented, and a
> > refurbished version of page_pool is the first return allocator type
> > introduced. This will be an integration point for AF_XDP zero-copy.
> >
> > The mlx5 driver evolves into using the page_pool, and sees a performance
> > increase (with ndo_xdp_xmit out the ixgbe driver) from 6 Mpps to 12 Mpps.
>
> So one question I would have is what effect does your patch set have
> on packets moving in the opposite direction, or ixgbe to ixgbe. My
> concern is how much of a hit we are taking in ixgbe to support a more
> generic solution.
The overhead for ixgbe is basically close to zero. Notice that the
MEM_TYPE_PAGE_SHARED type (which ixgbe uses) does not get allocated an
ID, so it avoids doing any hash lookup and basically just calls
page_frag_free():
	if (mem->type == MEM_TYPE_PAGE_SHARED) {
		page_frag_free(data);
		return;
	}
> > The patchset stops at the 15-patch limit, but more API changes are
> > planned. Specifically, extending the ndo_xdp_xmit and xdp_return_frame
> > APIs to support bulking, as this will address some known limits.
> >
> > V2: Updated according to Tariq's feedback
>
> I am assuming you are submitting this as an RFC, aren't you? If not, you
> have some general quality issues to work on. I saw c99 style "//"
> comments in patch 2, for example:
> +struct xdp_mem_info {
> + u32 type; /* enum mem_type, but known size type */
> + u32 id; // Needed later (to lookup struct xdp_rxq_info)
> +};
> +
>
> You should probably just go with the standard "/* */" style comments
> and be consistent.
This comment style is basically a reminder to myself (and it gets
removed later). I guess I should have removed it completely before I
submitted. Note, I've already done 3 versions of a pre-RFC patchset
(offlist), so I hope this patchset can soon be considered non-RFC
material.
> There are also "FIXME" comments in patch 6, either drop them entirely
> so you don't need to remove them in patch 15, or address the issue of
> explaining why you needed the call.
This was actually a question for Jason Wang... I should have pointed
that out explicitly.
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer