Date:   Tue, 24 Dec 2019 15:04:45 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Matteo Croce <mcroce@...hat.com>
Cc:     Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        netdev <netdev@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Lorenzo Bianconi <lorenzo@...nel.org>,
        Maxime Chevallier <maxime.chevallier@...tlin.com>,
        Antoine Tenart <antoine.tenart@...tlin.com>,
        Luka Perkov <luka.perkov@...tura.hr>,
        Tomislav Tomasic <tomislav.tomasic@...tura.hr>,
        Marcin Wojtas <mw@...ihalf.com>,
        Stefan Chulski <stefanc@...vell.com>,
        Nadav Haklai <nadavh@...vell.com>, brouer@...hat.com
Subject: Re: [RFC net-next 0/2] mvpp2: page_pool support

On Tue, 24 Dec 2019 14:34:07 +0100
Matteo Croce <mcroce@...hat.com> wrote:

> On Tue, Dec 24, 2019 at 10:52 AM Ilias Apalodimas
> <ilias.apalodimas@...aro.org> wrote:
> >
> > On Tue, Dec 24, 2019 at 02:01:01AM +0100, Matteo Croce wrote:  
> > > This patchset changes the memory allocator of mvpp2 from the frag allocator to
> > > the page_pool API. This change is needed in order to later add XDP support to mvpp2.
> > >
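For context: the driver-side change boils down to allocating RX buffers from a
page_pool instance instead of via the frag allocator. A minimal sketch of such
a setup (illustrative values and a made-up function name, not the actual mvpp2
patch):

  #include <net/page_pool.h>

  /* Sketch only: pool sized to the RX ring, one page per buffer,
   * with the pool doing the DMA mapping for us.
   */
  static struct page_pool *rxq_pool_create(struct device *dev,
                                           unsigned int pool_size)
  {
          struct page_pool_params pp_params = {
                  .flags     = PP_FLAG_DMA_MAP,  /* pool maps pages for DMA */
                  .order     = 0,                /* order-0: one page per buffer */
                  .pool_size = pool_size,        /* e.g. number of RX descriptors */
                  .nid       = NUMA_NO_NODE,
                  .dev       = dev,
                  .dma_dir   = DMA_FROM_DEVICE,
          };

          return page_pool_create(&pp_params);  /* ERR_PTR() on failure */
  }

RX refill then calls page_pool_dev_alloc_pages(pool) where the frag allocator
was used before.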
> > > The reason I'm sending this as an RFC is that with this changeset, mvpp2 performs
> > > much slower. This is the tc drop rate measured with a single flow:
> > >
> > > stock net-next with frag allocator:
> > > rx: 900.7 Mbps 1877 Kpps
> > >
> > > this patchset with page_pool:
> > > rx: 423.5 Mbps 882.3 Kpps
> > >
> > > This is the perf top when receiving traffic:
> > >
> > >   27.68%  [kernel]            [k] __page_pool_clean_page  
> >
> > This seems extremely high on the list.
> >  
> > >    9.79%  [kernel]            [k] get_page_from_freelist
> > >    7.18%  [kernel]            [k] free_unref_page
> > >    4.64%  [kernel]            [k] build_skb
> > >    4.63%  [kernel]            [k] __netif_receive_skb_core
> > >    3.83%  [mvpp2]             [k] mvpp2_poll
> > >    3.64%  [kernel]            [k] eth_type_trans
> > >    3.61%  [kernel]            [k] kmem_cache_free
> > >    3.03%  [kernel]            [k] kmem_cache_alloc
> > >    2.76%  [kernel]            [k] dev_gro_receive
> > >    2.69%  [mvpp2]             [k] mvpp2_bm_pool_put
> > >    2.68%  [kernel]            [k] page_frag_free
> > >    1.83%  [kernel]            [k] inet_gro_receive
> > >    1.74%  [kernel]            [k] page_pool_alloc_pages
> > >    1.70%  [kernel]            [k] __build_skb
> > >    1.47%  [kernel]            [k] __alloc_pages_nodemask
> > >    1.36%  [mvpp2]             [k] mvpp2_buf_alloc.isra.0
> > >    1.29%  [kernel]            [k] tcf_action_exec
> > >
> > > I tried Ilias' patches for page_pool recycling and get an improvement
> > > to ~1100 Kpps, but I'm still far from the original allocator's rate.  
> >
> > Can you post the recycling perf for comparison?
> >  
> 
>   12.00%  [kernel]                  [k] get_page_from_freelist
>    9.25%  [kernel]                  [k] free_unref_page

Hmm, this indicates that pages are not getting recycled.
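For reference, the fast path we are after looks roughly like this (simplified;
the signature is as in net-next around this time, so check your tree):

  /* On a drop in the driver's NAPI poll loop: */
  page_pool_put_page(pool, page, true);  /* allow_direct: we're in softirq */

  /* Fast path: the page lands in the pool's lockless alloc cache (or
   * its ptr_ring) and the next RX refill picks it up without touching
   * the page allocator. When recycling fails, the page instead falls
   * back to the normal allocator, and free_unref_page() plus
   * get_page_from_freelist() topping your profile says that is what is
   * happening here.
   */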

>    6.83%  [kernel]                  [k] eth_type_trans
>    5.33%  [kernel]                  [k] __netif_receive_skb_core
>    4.96%  [mvpp2]                   [k] mvpp2_poll
>    4.64%  [kernel]                  [k] kmem_cache_free
>    4.06%  [kernel]                  [k] __xdp_return

You do invoke the __xdp_return() code path, but it may find that the page
cannot be recycled...
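The gatekeeper is the refcount check in __page_pool_put_page(); paraphrased
and heavily simplified below (recycle_into_pool() is a placeholder name, not
a real function):

  if (page_ref_count(page) == 1) {
          /* page_pool still owns the page: recycle it into the
           * pool's lockless alloc cache or its ptr_ring.
           */
          recycle_into_pool(pool, page);
  } else {
          /* Someone else still holds a reference (skb frag in flight,
           * GRO, buffer split): unmap the DMA state and hand the page
           * back to the page allocator.
           */
          __page_pool_clean_page(pool, page);
          put_page(page);
  }

If pages reach __xdp_return() with an elevated refcount, every buffer takes
the slow path, which would match the profile below.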

>    3.60%  [kernel]                  [k] kmem_cache_alloc
>    3.31%  [kernel]                  [k] dev_gro_receive
>    3.29%  [kernel]                  [k] __page_pool_clean_page
>    3.25%  [mvpp2]                   [k] mvpp2_bm_pool_put
>    2.73%  [kernel]                  [k] __page_pool_put_page
>    2.33%  [kernel]                  [k] __alloc_pages_nodemask
>    2.33%  [kernel]                  [k] inet_gro_receive
>    2.05%  [kernel]                  [k] __build_skb
>    1.95%  [kernel]                  [k] build_skb
>    1.89%  [cls_matchall]            [k] mall_classify
>    1.83%  [kernel]                  [k] page_pool_alloc_pages
>    1.80%  [kernel]                  [k] tcf_action_exec
>    1.70%  [mvpp2]                   [k] mvpp2_buf_alloc.isra.0
>    1.63%  [kernel]                  [k] free_unref_page_prepare.part.0
>    1.45%  [kernel]                  [k] page_pool_return_skb_page
>    1.42%  [act_gact]                [k] tcf_gact_act
>    1.16%  [kernel]                  [k] netif_receive_skb_list_internal
>    1.08%  [kernel]                  [k] kfree_skb
>    1.07%  [kernel]                  [k] skb_release_data

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
