Message-ID: <CAGnkfhzrSaVe3zJ+0rriqqELha554Gmv-zskrJbiBjhHdUG2uQ@mail.gmail.com>
Date:   Tue, 24 Dec 2019 14:34:07 +0100
From:   Matteo Croce <mcroce@...hat.com>
To:     Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc:     netdev <netdev@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Lorenzo Bianconi <lorenzo@...nel.org>,
        Maxime Chevallier <maxime.chevallier@...tlin.com>,
        Antoine Tenart <antoine.tenart@...tlin.com>,
        Luka Perkov <luka.perkov@...tura.hr>,
        Tomislav Tomasic <tomislav.tomasic@...tura.hr>,
        Marcin Wojtas <mw@...ihalf.com>,
        Stefan Chulski <stefanc@...vell.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        Nadav Haklai <nadavh@...vell.com>
Subject: Re: [RFC net-next 0/2] mvpp2: page_pool support

On Tue, Dec 24, 2019 at 10:52 AM Ilias Apalodimas
<ilias.apalodimas@...aro.org> wrote:
>
> On Tue, Dec 24, 2019 at 02:01:01AM +0100, Matteo Croce wrote:
> > This patchset changes the memory allocator of mvpp2 from the frag
> > allocator to the page_pool API. This change is needed to later add
> > XDP support to mvpp2.
> >
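
For context, the allocation-path change is roughly the following (a minimal
sketch with illustrative variable names, not the actual patch):

    /* before: buffers come from the per-CPU page frag allocator */
    #include <linux/netdevice.h>

    void *buf = netdev_alloc_frag(frag_size);

    /* after: each buffer is a whole page taken from a page_pool */
    #include <net/page_pool.h>

    struct page *page = page_pool_dev_alloc_pages(pool);
    void *buf = page ? page_address(page) : NULL;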
> > The reason I'm sending this as an RFC is that with this changeset, mvpp2
> > performs much slower. This is the tc drop rate measured with a single flow:
> >
> > stock net-next with frag allocator:
> > rx: 900.7 Mbps 1877 Kpps
> >
> > this patchset with page_pool:
> > rx: 423.5 Mbps 882.3 Kpps
> >
> > This is the perf top when receiving traffic:
> >
> >   27.68%  [kernel]            [k] __page_pool_clean_page
>
> This seems extremely high on the list.
>
> >    9.79%  [kernel]            [k] get_page_from_freelist
> >    7.18%  [kernel]            [k] free_unref_page
> >    4.64%  [kernel]            [k] build_skb
> >    4.63%  [kernel]            [k] __netif_receive_skb_core
> >    3.83%  [mvpp2]             [k] mvpp2_poll
> >    3.64%  [kernel]            [k] eth_type_trans
> >    3.61%  [kernel]            [k] kmem_cache_free
> >    3.03%  [kernel]            [k] kmem_cache_alloc
> >    2.76%  [kernel]            [k] dev_gro_receive
> >    2.69%  [mvpp2]             [k] mvpp2_bm_pool_put
> >    2.68%  [kernel]            [k] page_frag_free
> >    1.83%  [kernel]            [k] inet_gro_receive
> >    1.74%  [kernel]            [k] page_pool_alloc_pages
> >    1.70%  [kernel]            [k] __build_skb
> >    1.47%  [kernel]            [k] __alloc_pages_nodemask
> >    1.36%  [mvpp2]             [k] mvpp2_buf_alloc.isra.0
> >    1.29%  [kernel]            [k] tcf_action_exec
> >
> > I tried Ilias' patches for page_pool recycling and I get an improvement
> > to ~1100 Kpps, but that's still far from the original allocator.
>
> Can you post the recycling perf for comparison?
>

  12.00%  [kernel]                  [k] get_page_from_freelist
   9.25%  [kernel]                  [k] free_unref_page
   6.83%  [kernel]                  [k] eth_type_trans
   5.33%  [kernel]                  [k] __netif_receive_skb_core
   4.96%  [mvpp2]                   [k] mvpp2_poll
   4.64%  [kernel]                  [k] kmem_cache_free
   4.06%  [kernel]                  [k] __xdp_return
   3.60%  [kernel]                  [k] kmem_cache_alloc
   3.31%  [kernel]                  [k] dev_gro_receive
   3.29%  [kernel]                  [k] __page_pool_clean_page
   3.25%  [mvpp2]                   [k] mvpp2_bm_pool_put
   2.73%  [kernel]                  [k] __page_pool_put_page
   2.33%  [kernel]                  [k] __alloc_pages_nodemask
   2.33%  [kernel]                  [k] inet_gro_receive
   2.05%  [kernel]                  [k] __build_skb
   1.95%  [kernel]                  [k] build_skb
   1.89%  [cls_matchall]            [k] mall_classify
   1.83%  [kernel]                  [k] page_pool_alloc_pages
   1.80%  [kernel]                  [k] tcf_action_exec
   1.70%  [mvpp2]                   [k] mvpp2_buf_alloc.isra.0
   1.63%  [kernel]                  [k] free_unref_page_prepare.part.0
   1.45%  [kernel]                  [k] page_pool_return_skb_page
   1.42%  [act_gact]                [k] tcf_gact_act
   1.16%  [kernel]                  [k] netif_receive_skb_list_internal
   1.08%  [kernel]                  [k] kfree_skb
   1.07%  [kernel]                  [k] skb_release_data
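
(For the record, the driver-side hook behind the recycling numbers above is
essentially marking each skb in the RX path -- a sketch assuming the
skb_mark_for_recycle() form of the API, error handling trimmed:)

    #include <linux/skbuff.h>

    /* after building the skb around a page_pool-backed buffer */
    skb = build_skb(data, frag_size);
    if (skb)
        skb_mark_for_recycle(skb);  /* on free, pages go back to the
                                     * pool instead of the page
                                     * allocator */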


> >
> > Any idea why I get such bad numbers?
>
> Nope, but it's indeed strange
>
> >
> > Another reason for sending this as an RFC is that I'm not fully convinced
> > on how to use the page_pool given the HW limitations of the BM.
>
> I'll have a look right after the holidays
>

Thanks

> >
> > The driver currently uses, for every CPU, a page_pool for short packets and
> > another for long ones. The driver also has 4 RX queues per port, so RXQ #1
> > of every port will share the short and long page pools of CPU #1.
> >
>
> I am not sure I am following the hardware config here
>

No worries, it's quite a mess; it took me a while to figure it out :)

The HW puts packets into different buffer pools depending on their size:
short: 64..128
long: 128..1664
jumbo: 1664..9856

Let's skip the jumbo buffers for now and assume we have 4 CPUs: the
driver allocates 4 short and 4 long buffer pools.
Each port has 4 RX queues, and each one uses a short and a long pool.
With the page_pool API, we have 8 struct page_pool instances, 4 for the
short buffers and 4 for the long ones.
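
In code, the setup is roughly this (a sketch: the pool sizes, flags and the
mvpp2_create_pool() helper are illustrative, not the exact patch):

    #include <net/page_pool.h>

    static struct page_pool *mvpp2_create_pool(struct device *dev,
                                               unsigned int size)
    {
        struct page_pool_params pp_params = {
            .order     = 0,               /* one page per buffer */
            .flags     = PP_FLAG_DMA_MAP, /* pool maps pages for DMA */
            .pool_size = size,
            .nid       = NUMA_NO_NODE,
            .dev       = dev,
            .dma_dir   = DMA_FROM_DEVICE,
        };

        return page_pool_create(&pp_params); /* ERR_PTR() on failure */
    }

    /* 4 CPUs -> 4 short + 4 long pools; RXQ #N of every port then uses
     * short_pool[N] and long_pool[N] */
    for (cpu = 0; cpu < 4; cpu++) {
        short_pool[cpu] = mvpp2_create_pool(dev, rxq_size);
        long_pool[cpu]  = mvpp2_create_pool(dev, rxq_size);
    }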


> > This means that for every RX queue I call xdp_rxq_info_reg_mem_model()
> > twice, on two different page_pools. Can this be a problem?
> >
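
For reference, the double registration looks like this (sketch, names
illustrative, error handling trimmed). Each struct xdp_rxq_info carries a
single xdp_mem_info, so the second call presumably replaces the first:

    #include <net/xdp.h>

    err = xdp_rxq_info_reg(&rxq->xdp_rxq, port->dev, rxq->id);

    /* once for each of the two pools this RXQ pulls buffers from */
    err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq,
                                     MEM_TYPE_PAGE_POOL, short_pool);
    err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq,
                                     MEM_TYPE_PAGE_POOL, long_pool);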
> > As usual, ideas are welcome.
> >
> > Matteo Croce (2):
> >   mvpp2: use page_pool allocator
> >   mvpp2: memory accounting
> >
> >  drivers/net/ethernet/marvell/Kconfig          |   1 +
> >  drivers/net/ethernet/marvell/mvpp2/mvpp2.h    |   7 +
> >  .../net/ethernet/marvell/mvpp2/mvpp2_main.c   | 142 +++++++++++++++---
> >  3 files changed, 125 insertions(+), 25 deletions(-)
> >
> > --
> > 2.24.1
> >
> Cheers
> /Ilias
>

Bye,
-- 
Matteo Croce
per aspera ad upstream
