Message-ID: <0cb14140-5632-8b07-9088-2adb7dfedc0b@redhat.com>
Date: Fri, 16 Mar 2018 17:04:17 +0800
From: Jason Wang <jasowang@...hat.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>, netdev@...r.kernel.org,
Björn Töpel <bjorn.topel@...el.com>,
magnus.karlsson@...el.com
Cc: eugenia@...lanox.com, John Fastabend <john.fastabend@...il.com>,
Eran Ben Elisha <eranbe@...lanox.com>,
Saeed Mahameed <saeedm@...lanox.com>, galp@...lanox.com,
Daniel Borkmann <borkmann@...earbox.net>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Tariq Toukan <tariqt@...lanox.com>
Subject: Re: [bpf-next V3 PATCH 00/15] XDP redirect memory return API
On 03/10/2018 04:55, Jesper Dangaard Brouer wrote:
> This patchset works towards supporting different XDP RX-ring memory
> allocators, as this will be needed by the AF_XDP zero-copy mode.
>
> The patchset uses mlx5 as the sample driver, which gets
> XDP_REDIRECT RX-mode implemented, but not ndo_xdp_xmit (as this API
> is subject to change throughout the patchset).
>
> A new struct xdp_frame is introduced (modeled after cpumap xdp_pkt).
> And both ndo_xdp_xmit and the new xdp_return_frame end-up using this.
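A minimal userspace sketch of the idea behind such a frame descriptor, modeled loosely on cpumap's xdp_pkt: the descriptor is stashed in the packet's headroom so no extra allocation is needed. Field names, layout, and the conversion helper here are illustrative approximations, not the struct the patchset actually defines:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative approximation of an xdp_frame-style descriptor
 * (modeled after cpumap's xdp_pkt); not the patchset's actual layout. */
struct xdp_frame {
    void *data;        /* start of packet payload */
    uint16_t len;      /* payload length */
    uint16_t headroom; /* headroom remaining after storing this struct */
    uint32_t metasize; /* XDP metadata length */
};

/* Convert an xdp_buff-style (hard_start, data, data_end) view into an
 * xdp_frame stored at the top of the packet's headroom. */
static struct xdp_frame *convert_to_xdp_frame(void *data_hard_start,
                                              void *data, void *data_end)
{
    struct xdp_frame *frame = data_hard_start;
    ptrdiff_t headroom = (char *)data - (char *)data_hard_start;

    if (headroom < (ptrdiff_t)sizeof(*frame))
        return NULL; /* not enough headroom to stash the descriptor */

    frame->data = data;
    frame->len = (uint16_t)((char *)data_end - (char *)data);
    frame->headroom = (uint16_t)(headroom - sizeof(*frame));
    frame->metasize = 0;
    return frame;
}
```

The point of the layout is that both ndo_xdp_xmit and xdp_return_frame can work from the same in-headroom descriptor without any side allocation.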
>
> Support for a driver-supplied allocator is implemented, and a
> refurbished version of page_pool is the first return-allocator type
> introduced. This will be an integration point for AF_XDP zero-copy.
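As a rough illustration of what a page_pool-style return allocator buys: returned pages are recycled through a small per-pool cache instead of going back to the page allocator on every frame return. The sketch below is a hypothetical userspace toy (names and cache depth invented), not the kernel page_pool API:

```c
#include <stdlib.h>

#define POOL_CACHE_SIZE 64   /* illustrative cache depth */
#define PAGE_SZ 4096

/* Toy pool: a stack of recycled "pages". */
struct toy_page_pool {
    void *cache[POOL_CACHE_SIZE];
    int count;
};

static void *pool_alloc(struct toy_page_pool *pool)
{
    if (pool->count > 0)
        return pool->cache[--pool->count]; /* fast path: recycle */
    return malloc(PAGE_SZ);                /* slow path: fresh page */
}

/* The xdp_return_frame-style hook: hand the page back to its pool. */
static void pool_return(struct toy_page_pool *pool, void *page)
{
    if (pool->count < POOL_CACHE_SIZE)
        pool->cache[pool->count++] = page; /* keep for reuse */
    else
        free(page);                        /* cache full, release */
}
```

Recycling like this is what lifts the redirect numbers: the RX ring keeps reusing warm, already-mapped pages instead of paying allocation cost per packet.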
>
> The mlx5 driver evolves into using the page_pool, and sees a
> performance increase (with ndo_xdp_xmit out the ixgbe driver) from
> 6 Mpps to 12 Mpps.
>
>
> The patchset stops at the 15-patch limit, but more API changes are
> planned: specifically, extending the ndo_xdp_xmit and
> xdp_return_frame APIs to support bulking, as this will address some
> known limits.
>
> V2: Updated according to Tariq's feedback
> V3: Feedback from Jason Wang and Alex Duyck
Hi Jesper:
It looks like the series forgot to register a memory model for tun and
virtio-net.
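For concreteness, the missing registration would presumably be a call like the one sketched below in the tun/virtio-net rxq setup path. The enum values and function here are stand-in stubs approximating the series' memory-model API, not the final kernel signatures:

```c
#include <stddef.h>

/* Stand-in stubs approximating the series' memory-model API;
 * real kernel types and signatures may differ. */
enum xdp_mem_type {
    MEM_TYPE_PAGE_SHARED, /* plain refcounted pages, no allocator */
    MEM_TYPE_PAGE_POOL,   /* driver-supplied page_pool allocator */
};

struct xdp_rxq_info {
    enum xdp_mem_type mem_type;
    void *allocator;
};

/* Record which return path xdp_return_frame should take for frames
 * that originate from this RX queue. */
static int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
                                      enum xdp_mem_type type,
                                      void *allocator)
{
    /* the page_pool type needs an allocator; shared pages do not */
    if (type == MEM_TYPE_PAGE_POOL && allocator == NULL)
        return -1;
    xdp_rxq->mem_type = type;
    xdp_rxq->allocator = allocator;
    return 0;
}
```

Without such a registration, redirected frames from tun or virtio-net would have no memory model to route their return through.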
Thanks