Message-ID: <f0d52a83-a027-1872-1321-9bf7884bcffa@intel.com>
Date: Wed, 1 Feb 2023 12:05:20 +0100
From: Alexander Lobakin <alexandr.lobakin@...el.com>
To: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
CC: <intel-wired-lan@...ts.osuosl.org>, <netdev@...r.kernel.org>,
<bpf@...r.kernel.org>, <anthony.l.nguyen@...el.com>,
<magnus.karlsson@...el.com>, <tirthendu.sarkar@...el.com>
Subject: Re: [PATCH bpf-next 00/13] ice: add XDP mbuf support
From: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
Date: Tue, 31 Jan 2023 21:44:53 +0100
> Hi there,
>
> although this work started as an effort to add multi-buffer XDP support
> to ice driver, as usual it turned out that some other side stuff needed
> to be addressed, so let me give you an overview.
>
> The first patch adjusts legacy-rx so that it is possible to refer to
> the skb_shared_info sitting at the end of the buffer when gathering up
> frame fragments within an xdp_buff.
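>
> For context, a minimal sketch of the pattern this enables (hedged: it
> uses the generic xdp_get_shared_info_from_buff() helper from
> include/net/xdp.h, and sketch_add_frag() is a hypothetical name, not
> the exact ice code):
>
>	#include <linux/skbuff.h>
>	#include <net/xdp.h>
>
>	/* skb_shared_info lives at the hard end of the buffer, so Rx
>	 * fragments collected across descriptors can be stored there in
>	 * place, before any skb exists.
>	 */
>	static void sketch_add_frag(struct xdp_buff *xdp, struct page *page,
>				    u32 off, u32 size)
>	{
>		struct skb_shared_info *sinfo =
>			xdp_get_shared_info_from_buff(xdp);
>
>		__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++,
>					   page, off, size);
>		sinfo->xdp_frags_size += size;
>	}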
>
> Then, patches 2-9 prepare the ice driver so that the actual
> multi-buffer patches are easier to swallow.
>
> Patches 10 and 11 are the meat. What is worth mentioning is that this
> set actually *fixes* things, as patch 11 removes the next_dd/next_rs
> based logic, which we had previously stepped away from for
> ice_xmit_zc(). Currently, the AF_XDP ZC XDP_TX workload is off, as
> there are two cleaning sides that can be triggered, and the two of them
> work on different internal logic. This set unifies that and lets us
> improve the performance by 2x with a trick in the last patch (13).
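>
> Roughly, the unified cleaning boils down to trusting the DD writeback
> instead of per-ring next_dd/next_rs bookkeeping. A hypothetical
> illustration (sketch_batch_done() and last_rs_idx are stand-ins for
> wherever the driver remembers the last RS position; this is not the
> actual diff):
>
>	/* HW writes DD back only on descriptors that had RS requested,
>	 * so one check covers the whole batch up to that descriptor.
>	 */
>	static bool sketch_batch_done(struct ice_tx_ring *xdp_ring,
>				      u16 last_rs_idx)
>	{
>		struct ice_tx_desc *d = ICE_TX_DESC(xdp_ring, last_rs_idx);
>
>		return d->cmd_type_offset_bsz &
>		       cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE);
>	}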
>
> Patch 12 is a simple cleanup of no-longer-needed fields from the Tx
> ring.
>
> I might be wrong, but I have not seen anyone report the performance
> impact of the patches that add XDP multi-buffer support to a particular
> driver. The numbers below were gathered via xdp_rxq_info and
> xdp_redirect_map at 1500 MTU (example invocations follow the numbers):
>
> XDP_DROP     +1%
> XDP_PASS     -1.2%
> XDP_TX       -0.5%
> XDP_REDIRECT -3.3%
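>
> Example invocations (the interface name is a placeholder; both tools
> lived under samples/bpf in the kernel tree at the time):
>
>	./xdp_rxq_info --dev <ifname> --action XDP_DROP
>	./xdp_redirect_map <ifname> <ifname>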
>
> Cherry on top, which is not directly related to mbuf support (last
> patch):
> XDP_TX ZC +126%
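>
> Loosely, the trick is to stop round-tripping ZC buffs through an
> xdp_frame for XDP_TX. A hedged before/after sketch (the "after"
> signature is approximate, not necessarily the exact upstream one):
>
>	/* before: the ZC Rx buff was converted to an xdp_frame first */
>	struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
>
>	if (unlikely(!xdpf))
>		return ICE_XDP_CONSUMED;
>	return ice_xmit_xdp_ring(xdpf->data, xdpf->len, xdp_ring);
>
>	/* after (roughly): the buff goes onto the XDP Tx ring as-is and
>	 * is handed back to the xsk buff pool only at cleaning time
>	 */
>	return ice_xmit_xdp_ring(xdp, xdp_ring);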
>
> The target we agreed on was to not degrade performance for any action
> by more than 5%, so our goal was met. Basically, this set keeps the
> performance where it was. Redirect is slower due to more frequent tail
> bumps.
>
> Thanks!
You forgot to add my
Reviewed-by: Alexander Lobakin <alexandr.lobakin@...el.com>
for the whole series :D
>
>
> Maciej Fijalkowski (13):
> ice: prepare legacy-rx for upcoming XDP multi-buffer support
> ice: add xdp_buff to ice_rx_ring struct
> ice: store page count inside ice_rx_buf
> ice: pull out next_to_clean bump out of ice_put_rx_buf()
> ice: inline eop check
> ice: centralize Rx buffer recycling
> ice: use ice_max_xdp_frame_size() in ice_xdp_setup_prog()
> ice: do not call ice_finalize_xdp_rx() unnecessarily
> ice: use xdp->frame_sz instead of recalculating truesize
> ice: add support for XDP multi-buffer on Rx side
> ice: add support for XDP multi-buffer on Tx side
> ice: remove next_{dd,rs} fields from ice_tx_ring
> ice: xsk: do not convert buff to frame for XDP_TX
>
> drivers/net/ethernet/intel/ice/ice_base.c | 21 +-
> drivers/net/ethernet/intel/ice/ice_ethtool.c | 4 +-
> drivers/net/ethernet/intel/ice/ice_lib.c | 8 +-
> drivers/net/ethernet/intel/ice/ice_main.c | 47 +-
> drivers/net/ethernet/intel/ice/ice_txrx.c | 408 ++++++++++--------
> drivers/net/ethernet/intel/ice/ice_txrx.h | 54 ++-
> drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 236 ++++++----
> drivers/net/ethernet/intel/ice/ice_txrx_lib.h | 75 +++-
> drivers/net/ethernet/intel/ice/ice_xsk.c | 192 +++++----
> 9 files changed, 629 insertions(+), 416 deletions(-)
Thanks,
Olek