Date:   Thu, 29 Sep 2022 17:28:43 +0200
From:   Jesper Dangaard Brouer <jbrouer@...hat.com>
To:     Shenwei Wang <shenwei.wang@....com>, Andrew Lunn <andrew@...n.ch>
Cc:     brouer@...hat.com, Joakim Zhang <qiangqing.zhang@....com>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        John Fastabend <john.fastabend@...il.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "imx@...ts.linux.dev" <imx@...ts.linux.dev>
Subject: Re: [EXT] Re: [PATCH 1/1] net: fec: add initial XDP support



On 29/09/2022 15.26, Shenwei Wang wrote:
> 
>> From: Andrew Lunn <andrew@...n.ch>
>> Sent: Thursday, September 29, 2022 8:23 AM
[...]
>>
>>> I actually did some comparison testing regarding the page pool for normal
>>> traffic.  So far I don't see significant improvement in the current
>>> implementation. The performance for large packets improves a little,
>>> and the performance for small packets gets a little worse.
>>
>> What hardware was this for? imx51? imx6? imx7 Vybrid? These all use the FEC.
> 
> I tested on the imx8qxp platform. It is ARM64.

On the mvneta driver/platform we saw a huge speedup replacing:

   page_pool_release_page(rxq->page_pool, page);
with
   skb_mark_for_recycle(skb);

As I mentioned: today page_pool has SKB recycle support (you might have 
looked at drivers that didn't utilize this yet), thus you don't need to 
release the page (page_pool_release_page) here.  Instead you could 
simply mark the SKB for recycling, unless the driver does some page 
refcnt tricks I didn't notice.
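
To illustrate what I mean, here is a rough sketch of the RX path change 
(not the actual fec driver code; 'page', 'pad', 'pkt_len' and 'rxq' are 
placeholders, only the helpers are the real kernel APIs):

   /* RX completion: 'page' was allocated from rxq->page_pool */
   skb = build_skb(page_address(page), PAGE_SIZE);
   skb_reserve(skb, pad);      /* headroom */
   skb_put(skb, pkt_len);

   /* Old: DMA unmap + detach the page from the pool */
   /* page_pool_release_page(rxq->page_pool, page); */

   /* New: keep the page DMA-mapped; when the SKB is freed, the page
    * goes back into rxq->page_pool for recycling.
    */
   skb_mark_for_recycle(skb);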

On the mvneta driver/platform the DMA unmap (in page_pool_release_page) 
was very expensive. This imx8qxp platform might have a faster DMA unmap 
in case it is cache-coherent.

I would be very interested in knowing whether skb_mark_for_recycle() 
helps normal network stack performance on this platform.

>> By small packets, do you mean those under the copybreak limit?
>>
>> Please provide some benchmark numbers with your next patchset.
> 
> Yes, the packet size is 64 bytes and it is under the copybreak limit.
> As the impact is not significant, I would prefer to remove the
> copybreak logic.

+1 to removing this logic if possible, due to maintenance cost.
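
(For context, the copybreak pattern being discussed is roughly the 
following; a generic sketch, not the exact fec code, and the names are 
illustrative:)

   /* For small frames, copy into a freshly allocated small SKB so the
    * large RX buffer stays on the ring and can be reused directly.
    */
   if (pkt_len < COPYBREAK_DEFAULT) {
           skb = netdev_alloc_skb_ip_align(ndev, pkt_len);
           if (skb)
                   skb_put_data(skb, data, pkt_len);
           /* the original RX buffer is not consumed, no refill needed */
   } else {
           /* hand the full buffer/page to the stack */
   }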

--Jesper
