Message-ID: <PAXPR04MB91852D02CF96125E4EB3D2F1895F9@PAXPR04MB9185.eurprd04.prod.outlook.com>
Date:   Fri, 7 Oct 2022 19:18:18 +0000
From:   Shenwei Wang <shenwei.wang@....com>
To:     Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        Jesper Dangaard Brouer <jbrouer@...hat.com>
CC:     Andrew Lunn <andrew@...n.ch>,
        "brouer@...hat.com" <brouer@...hat.com>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        John Fastabend <john.fastabend@...il.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "imx@...ts.linux.dev" <imx@...ts.linux.dev>,
        Magnus Karlsson <magnus.karlsson@...il.com>,
        Björn Töpel <bjorn@...nel.org>
Subject: RE: [EXT] Re: [PATCH 1/1] net: fec: add initial XDP support

Hi Jesper and Ilias,

The driver has a macro that configures the RX ring size. After testing
with different RX ring sizes, I found the strange result may have
something to do with the ring size.
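
(Side note: the ring size here is a compile-time define in the driver
source, so each of the ring sizes tested below means rebuilding the
driver.  The define below is only an illustration of the kind of constant
that was varied; the exact name and default value in fec_main.c may
differ.)

/* Illustrative only -- not necessarily the exact macro in the fec driver. */
#define RX_RING_SIZE    64      /* number of RX buffer descriptors */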

I just tested with the xdpsock application (a rough attach sketch follows
the table below).
  -- "Native" here means running the command "xdpsock -i eth0"
  -- "SKB-Mode" means running the command "xdpsock -S -i eth0"

RX Ring Size       16       32        64       128    (pkt/s)
      Native      230K     227K      196K      160K
    SKB-Mode      207K     208K      203K      204K

It seems that the smaller the ring size, the better the performance,
which is also a strange result to me.
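
For clarity on the two rows: my understanding is that the -S switch makes
xdpsock attach the XDP program in generic/SKB mode (the program runs in
the core stack after an skb has been built), while the default is native
mode in the driver's RX path.  Roughly, and assuming the sample behaves
like a plain libbpf attach (the helper below is illustrative, not
xdpsock's actual code):

#include <stdbool.h>
#include <bpf/libbpf.h>
#include <linux/if_link.h>

/* Illustrative only: what the xdpsock "-S" switch is assumed to select. */
static int attach_xdp(int ifindex, int prog_fd, bool skb_mode)
{
        __u32 flags = skb_mode ? XDP_FLAGS_SKB_MODE : XDP_FLAGS_DRV_MODE;

        /* Generic (SKB) mode works on any driver; native mode needs the
         * driver-side XDP support this patch is adding.
         */
        return bpf_xdp_attach(ifindex, prog_fd, flags, NULL);
}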

The following are the iperf test results.

RX Ring Size         16         64       128
iperf              300Mbps    830Mbps   933Mbps
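
For context on the skb_mark_for_recycle vs. page_pool_release_page
comparison quoted further down: the difference is only in how the
page_pool page is handed to the stack on XDP_PASS.  A driver-agnostic
sketch follows (not the actual fec patch; the function, the
USE_SKB_MARK_FOR_RECYCLE switch and the parameter names are made up for
illustration):

#include <linux/bpf.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <net/page_pool.h>

/* Hypothetical XDP_PASS path of a page_pool based RX handler -- NOT the
 * real fec code.  Only the page hand-off differs between the two columns
 * in the quoted results.
 */
static void rx_pass_to_stack(struct napi_struct *napi, struct page_pool *pool,
                             struct page *page, unsigned int len)
{
        struct sk_buff *skb;

        /* Wrap the page_pool page in an skb without copying the payload. */
        skb = build_skb(page_address(page), PAGE_SIZE);
        if (!skb)
                return;
        skb_reserve(skb, XDP_PACKET_HEADROOM);
        skb_put(skb, len);

#ifdef USE_SKB_MARK_FOR_RECYCLE
        /* The skb keeps the page; when the skb is freed the page returns
         * to the pool still DMA-mapped, so ring refill skips allocation
         * and mapping.  This is the fast column in the quoted table.
         */
        skb_mark_for_recycle(skb);
#else
        /* Unmap the page and drop page_pool ownership; the page is freed
         * through the page allocator and refill must allocate and DMA-map
         * a fresh page for every packet.
         */
        page_pool_release_page(pool, page);
#endif

        napi_gro_receive(napi, skb);
}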

Thanks,
Shenwei

> -----Original Message-----
> From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
> Sent: Friday, October 7, 2022 3:08 AM
> To: Jesper Dangaard Brouer <jbrouer@...hat.com>
> Cc: Shenwei Wang <shenwei.wang@....com>; Andrew Lunn
> <andrew@...n.ch>; brouer@...hat.com; David S. Miller
> <davem@...emloft.net>; Eric Dumazet <edumazet@...gle.com>; Jakub
> Kicinski <kuba@...nel.org>; Paolo Abeni <pabeni@...hat.com>; Alexei
> Starovoitov <ast@...nel.org>; Daniel Borkmann <daniel@...earbox.net>;
> Jesper Dangaard Brouer <hawk@...nel.org>; John Fastabend
> <john.fastabend@...il.com>; netdev@...r.kernel.org; linux-
> kernel@...r.kernel.org; imx@...ts.linux.dev; Magnus Karlsson
> <magnus.karlsson@...il.com>; Björn Töpel <bjorn@...nel.org>
> Subject: Re: [EXT] Re: [PATCH 1/1] net: fec: add initial XDP support
> 
> Hi Jesper,
> 
> On Thu, 6 Oct 2022 at 11:37, Jesper Dangaard Brouer <jbrouer@...hat.com>
> wrote:
> >
> >
> >
> > On 05/10/2022 14.40, Shenwei Wang wrote:
> > > Hi Jesper,
> > >
> > > Here is the summary of "xdp_rxq_info" testing.
> > >
> > >                skb_mark_for_recycle           page_pool_release_page
> > >
> > >               Native        SKB-Mode           Native          SKB-Mode
> > > XDP_DROP     460K           220K              460K             102K
> > > XDP_PASS     80K            113K              60K              62K
> > >
> >
> > It is very pleasing to see the *huge* performance benefit that
> > page_pool provides when recycling pages for SKBs (via skb_mark_for_recycle).
> > I did expect a performance boost, but not one of around 2x.
> 
> Indeed, that's a pleasant surprise.  Keep in mind that if we convert more
> drivers we can also get rid of the copy_break code sprinkled around in drivers.
> 
> Thanks
> /Ilias
> >
> > I guess this platform has a larger overhead for DMA-mapping and
> > page-allocation.
> >
> > IMHO it would be valuable to include this result as part of the patch
> > description when you post the XDP patch again.
> >
> > The only strange result is that XDP_PASS 'Native' is slower than 'SKB-mode'.
> > I cannot explain why, as XDP_PASS essentially does nothing and just
> > follows the normal driver code path to the netstack.
> >
> > Thanks a lot for doing these tests.
> > --Jesper
> >
> > > The following are the testing log.
> > >
> > > Thanks,
> > > Shenwei
> > >
> > > ### skb_mark_for_recycle solution ###
> > >
> > > ./xdp_rxq_info --dev eth0 --act XDP_DROP --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_DROP options:read
> > > XDP stats       CPU     pps         issue-pps
> > > XDP-RX CPU      0       466,553     0
> > > XDP-RX CPU      total   466,553
> > >
> > > ./xdp_rxq_info -S --dev eth0 --act XDP_DROP --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_DROP options:read
> > > XDP stats       CPU     pps         issue-pps
> > > XDP-RX CPU      0       226,272     0
> > > XDP-RX CPU      total   226,272
> > >
> > > ./xdp_rxq_info --dev eth0 --act XDP_PASS --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_PASS options:read
> > > XDP stats       CPU     pps         issue-pps
> > > XDP-RX CPU      0       80,518      0
> > > XDP-RX CPU      total   80,518
> > >
> > > ./xdp_rxq_info -S --dev eth0 --act XDP_PASS --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_PASS options:read
> > > XDP stats       CPU     pps         issue-pps
> > > XDP-RX CPU      0       113,681     0
> > > XDP-RX CPU      total   113,681
> > >
> > >
> > > ### page_pool_release_page solution ###
> > >
> > > ./xdp_rxq_info --dev eth0 --act XDP_DROP --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_DROP options:read
> > > XDP stats       CPU     pps         issue-pps
> > > XDP-RX CPU      0       463,145     0
> > > XDP-RX CPU      total   463,145
> > >
> > > ./xdp_rxq_info -S --dev eth0 --act XDP_DROP --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_DROP options:read
> > > XDP stats       CPU     pps         issue-pps
> > > XDP-RX CPU      0       104,443     0
> > > XDP-RX CPU      total   104,443
> > >
> > > ./xdp_rxq_info --dev eth0 --act XDP_PASS --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_PASS options:read
> > > XDP stats       CPU     pps         issue-pps
> > > XDP-RX CPU      0       60,539      0
> > > XDP-RX CPU      total   60,539
> > >
> > > ./xdp_rxq_info -S --dev eth0 --act XDP_PASS --read
> > >
> > > Running XDP on dev:eth0 (ifindex:2) action:XDP_PASS options:read
> > > XDP stats       CPU     pps         issue-pps
> > > XDP-RX CPU      0       62,566      0
> > > XDP-RX CPU      total   62,566
> > >
> > >> -----Original Message-----
> > >> From: Shenwei Wang
> > >> Sent: Tuesday, October 4, 2022 8:34 AM
> > >> To: Jesper Dangaard Brouer <jbrouer@...hat.com>; Andrew Lunn
> > >> <andrew@...n.ch>
> > >> Cc: brouer@...hat.com; David S. Miller <davem@...emloft.net>; Eric
> > >> Dumazet <edumazet@...gle.com>; Jakub Kicinski <kuba@...nel.org>;
> > >> Paolo Abeni <pabeni@...hat.com>; Alexei Starovoitov
> > >> <ast@...nel.org>; Daniel Borkmann <daniel@...earbox.net>; Jesper
> > >> Dangaard Brouer <hawk@...nel.org>; John Fastabend
> > >> <john.fastabend@...il.com>; netdev@...r.kernel.org; linux-
> > >> kernel@...r.kernel.org; imx@...ts.linux.dev; Magnus Karlsson
> > >> <magnus.karlsson@...il.com>; Björn Töpel <bjorn@...nel.org>; Ilias
> > >> Apalodimas <ilias.apalodimas@...aro.org>
> > >> Subject: RE: [EXT] Re: [PATCH 1/1] net: fec: add initial XDP
> > >> support
> > >>
> > >>
> > >>
> > >>> -----Original Message-----
> > >>> From: Shenwei Wang
> > >>> Sent: Tuesday, October 4, 2022 8:13 AM
> > >>> To: Jesper Dangaard Brouer <jbrouer@...hat.com>; Andrew Lunn
> > >> ...
> > >>> I haven't tested xdp_rxq_info yet; I will give it a try later today.
> > >>> However, for the XDP_DROP test, I did try the xdp2 test case, and the
> > >>> result looks reasonable.  The performance of Native mode is much
> > >>> higher than SKB-Mode.
> > >>>
> > >>> # xdp2 eth0
> > >>>   proto 0:     475362 pkt/s
> > >>>
> > >>> # xdp2 -S eth0             (page_pool_release_page solution)
> > >>>   proto 17:     71999 pkt/s
> > >>>
> > >>> # xdp2 -S eth0             (skb_mark_for_recycle solution)
> > >>>   proto 17:     72228 pkt/s
> > >>>
> > >>
> > >> Correction for xdp2 -S eth0  (skb_mark_for_recycle solution)
> > >> proto 0:          0 pkt/s
> > >> proto 17:     122473 pkt/s
> > >>
> > >> Thanks,
> > >> Shenwei
> > >
> >
