Message-ID: <20221110202147.hkhfvvb55djob43x@soft-dev3-1>
Date:   Thu, 10 Nov 2022 21:21:47 +0100
From:   Horatiu Vultur <horatiu.vultur@...rochip.com>
To:     Alexander Lobakin <alexandr.lobakin@...el.com>
CC:     Andrew Lunn <andrew@...n.ch>, <linux-kernel@...r.kernel.org>,
        <netdev@...r.kernel.org>, <bpf@...r.kernel.org>,
        <davem@...emloft.net>, <edumazet@...gle.com>, <kuba@...nel.org>,
        <pabeni@...hat.com>, <ast@...nel.org>, <daniel@...earbox.net>,
        <hawk@...nel.org>, <john.fastabend@...il.com>,
        <linux@...linux.org.uk>, <UNGLinuxDriver@...rochip.com>
Subject: Re: [PATCH net-next v3 0/4] net: lan966x: Add xdp support

On 11/10/2022 17:21, Alexander Lobakin wrote:

Hi,

> 
> From: Andrew Lunn <andrew@...n.ch>
> Date: Thu, 10 Nov 2022 14:57:35 +0100
> 
> > > Nice stuff! I hear from time to time that XDP is for 10G+ NICs
> > > only, but I'm not a fan of that view, and this series proves once
> > > again that XDP fits any hardware ^.^
> >
> > The Freescale FEC recently gained XDP support. Many variants of it are
> > Fast Ethernet only.
> >
> > What I found most interesting about that patchset was that the use
> > of the page_pool API made the driver significantly faster for the
> > general case as well as for XDP.
> 
> The driver didn't have any page recycling or page splitting logic,
> while Page Pool recycles even pages from skbs if
> skb_mark_for_recycle() is used, which is the case here. So it
> significantly reduced the number of new page allocations for Rx, if
> there still are any at all.
> Plus, Page Pool allocates pages in bulk (16 at a time, IIRC), not one
> by one, which reduces CPU overhead as well.
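
For readers following along, the Rx pattern described above boils down
to roughly the sketch below. The my_rx_* helper names are made up for
illustration; page_pool_create(), page_pool_dev_alloc_pages(),
page_pool_put_full_page() and skb_mark_for_recycle() are the in-kernel
API, but the pool parameters are only plausible defaults, not what
lan966x actually uses, and headroom/truesize handling is omitted:

#include <linux/dma-mapping.h>
#include <linux/numa.h>
#include <linux/skbuff.h>
#include <net/page_pool.h>

static struct page_pool *my_rx_create_pool(struct device *dev)
{
        struct page_pool_params pp = {
                .flags          = PP_FLAG_DMA_MAP, /* pool maps pages for us */
                .order          = 0,
                .pool_size      = 256,             /* depends on ring size */
                .nid            = NUMA_NO_NODE,
                .dev            = dev,
                .dma_dir        = DMA_FROM_DEVICE,
        };

        return page_pool_create(&pp);   /* ERR_PTR() on failure */
}

static struct sk_buff *my_rx_build_skb(struct page_pool *pool, u32 len)
{
        struct page *page;
        struct sk_buff *skb;

        /* Served from the pool's cache, which is refilled in bulk
         * rather than one page at a time. */
        page = page_pool_dev_alloc_pages(pool);
        if (!page)
                return NULL;

        skb = build_skb(page_address(page), PAGE_SIZE);
        if (!skb) {
                page_pool_put_full_page(pool, page, true);
                return NULL;
        }

        skb_put(skb, len);
        /* The key bit: freeing the skb now returns the page to the
         * pool instead of the page allocator. */
        skb_mark_for_recycle(skb);
        return skb;
}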

Just to make sure everything is clear: the results I showed in the
cover letter are without any XDP program on the interfaces, because I
thought that is the correct way to compare the results before and
after all these changes.

Once I attach an XDP program to the interface, the performance drops.
The program looks for some EtherTypes and always returns XDP_PASS.
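
For reference, the program is roughly of this shape (not the exact one
I used; the function name and the matched EtherTypes are illustrative):

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int pass_after_ethertype_check(struct xdp_md *ctx)
{
        void *data_end = (void *)(long)ctx->data_end;
        void *data = (void *)(long)ctx->data;
        struct ethhdr *eth = data;

        /* Bounds check, required by the verifier. */
        if (data + sizeof(*eth) > data_end)
                return XDP_PASS;

        /* "Look for some ether types": the match doesn't change the
         * verdict, every frame is passed up the stack. */
        switch (bpf_ntohs(eth->h_proto)) {
        case ETH_P_IP:
        case ETH_P_IPV6:
        default:
                return XDP_PASS;
        }
}

char _license[] SEC("license") = "GPL";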

These are the results with such an XDP program on the interface:
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.01  sec   486 MBytes   408 Mbits/sec    0             sender
[  5]   0.00-10.00  sec   483 MBytes   405 Mbits/sec                  receiver


-- 
/Horatiu
