Date:   Fri, 3 Sep 2021 06:50:07 +0000
From:   "Kuruvinakunnel, George" <george.kuruvinakunnel@...el.com>
To:     "Fijalkowski, Maciej" <maciej.fijalkowski@...el.com>,
        "intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>
CC:     "joamaki@...il.com" <joamaki@...il.com>,
        "Lobakin, Alexandr" <alexandr.lobakin@...el.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "toke@...hat.com" <toke@...hat.com>,
        "bjorn@...nel.org" <bjorn@...nel.org>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "bpf@...r.kernel.org" <bpf@...r.kernel.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "Karlsson, Magnus" <magnus.karlsson@...el.com>
Subject: RE: [Intel-wired-lan] [PATCH v7 intel-next 7/9] ice: optimize XDP_TX
 workloads

> From: Intel-wired-lan <intel-wired-lan-bounces@...osl.org> On Behalf Of Maciej
> Fijalkowski
> Sent: Thursday, August 19, 2021 5:30 PM
> To: intel-wired-lan@...ts.osuosl.org
> Cc: joamaki@...il.com; Lobakin, Alexandr <alexandr.lobakin@...el.com>;
> netdev@...r.kernel.org; toke@...hat.com; bjorn@...nel.org; kuba@...nel.org;
> bpf@...r.kernel.org; davem@...emloft.net; Karlsson, Magnus
> <magnus.karlsson@...el.com>
> Subject: [Intel-wired-lan] [PATCH v7 intel-next 7/9] ice: optimize XDP_TX workloads
> 
> Optimize Tx descriptor cleaning for XDP. The current approach doesn't really
> scale and chokes when multiple flows are handled.
> 
> Introduce two ring fields, @next_dd and @next_rs, that will keep track of the
> descriptor that should be looked at when the need for cleaning arises and the
> descriptor that should have the RS bit set, respectively.
> 
> Note that at this point the threshold is a constant (32), but it is something that we
> could make configurable.
> 
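[Not part of the patch text, but a minimal sketch of the bookkeeping this adds.
Everything besides @next_dd/@next_rs themselves (the surrounding struct layout
and the ICE_TX_THRESH name) is an assumption for illustration:]

/* sketch: two extra producer/consumer bookmarks on the Tx ring */
#define ICE_TX_THRESH	32	/* batch size; a constant for now */

struct ice_ring {
	/* ... existing members ... */
	u16 next_to_use;	/* NTU, producer index */
	u16 next_to_clean;	/* consumer index */
	u16 next_dd;		/* descriptor to poll for the DD bit */
	u16 next_rs;		/* descriptor that gets the next RS bit */
	u16 count;		/* ring length */
};
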
> The first thing is to get away from setting the RS bit on each descriptor. Let's do
> this only once NTU is higher than the current @next_rs value. In that case, grab
> tx_desc[next_rs], set the RS bit in the descriptor, and advance @next_rs by 32.
> 
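[In code, the producer-side check might look roughly like the below -- a sketch
under the assumptions above; the ICE_TX_DESC() accessor usage and the wrap
handling are simplified:]

/* after filling a descriptor and advancing next_to_use (NTU) */
if (xdp_ring->next_to_use > xdp_ring->next_rs) {
	struct ice_tx_desc *rs_desc =
		ICE_TX_DESC(xdp_ring, xdp_ring->next_rs);

	/* ask HW to write back completion status for this batch */
	rs_desc->cmd_type_offset_bsz |=
		cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S);
	xdp_ring->next_rs += ICE_TX_THRESH;
}
/* on a ring wrap, next_rs is reset to ICE_TX_THRESH - 1 */
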
> The second thing is to clean the Tx ring only when there are fewer than 32 free
> entries. In that case, check tx_desc[next_dd] for the DD bit. This bit is written
> back by HW to let the driver know that the xmit was successful; it is set only on
> those descriptors that had the RS bit set. Clean exactly 32 descriptors and
> advance @next_dd by 32.
> 
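[A sketch of that consumer side, under the same assumptions -- the helper name
and the buffer-freeing details are illustrative, not copied from the patch:]

/* reclaim one batch of ICE_TX_THRESH descriptors, if HW is done with it */
static void ice_clean_xdp_irq(struct ice_ring *xdp_ring)
{
	struct ice_tx_desc *dd_desc =
		ICE_TX_DESC(xdp_ring, xdp_ring->next_dd);
	u16 ntc = xdp_ring->next_to_clean;
	int i;

	/* DD not written back yet -> this batch is still in flight */
	if (!(dd_desc->cmd_type_offset_bsz &
	      cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)))
		return;

	for (i = 0; i < ICE_TX_THRESH; i++) {
		/* dma-unmap and free the XDP buffer at ntc (elided) */
		if (++ntc == xdp_ring->count)
			ntc = 0;
	}

	dd_desc->cmd_type_offset_bsz = 0;	/* re-arm for the next batch */
	xdp_ring->next_dd += ICE_TX_THRESH;
	if (xdp_ring->next_dd >= xdp_ring->count)
		xdp_ring->next_dd = ICE_TX_THRESH - 1;
	xdp_ring->next_to_clean = ntc;
}
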
> The actual cleaning routine is moved from ice_napi_poll() down into
> ice_xmit_xdp_ring(). It is safe to do so as the XDP ring will not get any SKBs
> that would rely on interrupts for the cleaning. A nice side effect is that for the
> rare case of the Tx fallback path (which the next patch is going to introduce) we
> don't have to trigger the SW irq to clean the ring.
> 
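[With cleaning moved into the xmit path, the entry point only needs a headroom
check up front. Roughly -- the ICE_DESC_UNUSED() helper, the function signature
and the elided body are assumptions here:]

int ice_xmit_xdp_ring(void *data, u16 size, struct ice_ring *xdp_ring)
{
	/* reclaim a batch ourselves instead of waiting for an irq */
	if (ICE_DESC_UNUSED(xdp_ring) < ICE_TX_THRESH)
		ice_clean_xdp_irq(xdp_ring);

	/* ... map the buffer, write the descriptor, bump NTU and
	 * set the RS bit as sketched above ...
	 */
	return ICE_XDP_TX;
}
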
> With those two concepts, the ring is kept almost full, but it is guaranteed that
> the driver will always be able to produce Tx descriptors.
> 
> This approach seems to work out well even though the Tx descriptors are
> produced one by one. The test was conducted with the ice HW bombarded with
> packets from a HW generator configured to generate 30 flows.
> 
> The xdp2 sample yields the following results:
> <snip>
> proto 17:   79973066 pkt/s
> proto 17:   80018911 pkt/s
> proto 17:   80004654 pkt/s
> proto 17:   79992395 pkt/s
> proto 17:   79975162 pkt/s
> proto 17:   79955054 pkt/s
> proto 17:   79869168 pkt/s
> proto 17:   79823947 pkt/s
> proto 17:   79636971 pkt/s
> </snip>
> 
> As that sample reports the Rx'ed frames, let's look at the sar output.
> It shows that what we Rx'ed we actually Tx'ed, with no noticeable drops.
> Average:   IFACE     rxpck/s     txpck/s      rxkB/s      txkB/s   rxcmp/s  txcmp/s  rxmcst/s  %ifutil
> Average:  ens4f1 79842324.00 79842310.40  4678261.17  4678260.38     0.00     0.00      0.00    38.32
> 
> with tx_busy staying calm.
> 
> When compared to a state before:
> Average:   IFACE     rxpck/s     txpck/s      rxkB/s      txkB/s   rxcmp/s  txcmp/s  rxmcst/s  %ifutil
> Average:  ens4f1 90919711.60 42233822.60  5327326.85  2474638.04     0.00     0.00      0.00    43.64
> 
> it can be observed that txpck/s has almost doubled (79842310 vs 42233823, a
> factor of ~1.89), meaning that performance is improved by around 90%. All of the
> earlier loss was due to drops in the driver: previously the tx_busy stat was being
> bumped at a ~7 Mpps rate.
> 
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@...el.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_main.c     |  9 ++-
>  drivers/net/ethernet/intel/ice/ice_txrx.c     | 21 +++---
>  drivers/net/ethernet/intel/ice/ice_txrx.h     | 10 ++-
>  drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 73 ++++++++++++++++---
>  4 files changed, 88 insertions(+), 25 deletions(-)
> 

Tested-by: George Kuruvinakunnel <george.kuruvinakunnel@...el.com>
