Message-ID: <CANn89iJDf9uzSdqLEBeTeGB1uAxvmruKfK5HbeZWp+Cdc+qggQ@mail.gmail.com>
Date:   Wed, 18 Aug 2021 10:57:06 +0200
From:   Eric Dumazet <edumazet@...gle.com>
To:     Yunsheng Lin <linyunsheng@...wei.com>
Cc:     David Miller <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Alexander Duyck <alexander.duyck@...il.com>,
        Russell King <linux@...linux.org.uk>,
        Marcin Wojtas <mw@...ihalf.com>, linuxarm@...neuler.org,
        Yisen Zhuang <yisen.zhuang@...wei.com>,
        Salil Mehta <salil.mehta@...wei.com>,
        Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        John Fastabend <john.fastabend@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Will Deacon <will@...nel.org>,
        Matthew Wilcox <willy@...radead.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        Fenghua Yu <fenghua.yu@...el.com>,
        Roman Gushchin <guro@...com>, Peter Xu <peterx@...hat.com>,
        "Tang, Feng" <feng.tang@...el.com>, Jason Gunthorpe <jgg@...pe.ca>,
        mcroce@...rosoft.com, Hugh Dickins <hughd@...gle.com>,
        Jonathan Lemon <jonathan.lemon@...il.com>,
        Alexander Lobakin <alobakin@...me>,
        Willem de Bruijn <willemb@...gle.com>,
        wenxu <wenxu@...oud.cn>, Cong Wang <cong.wang@...edance.com>,
        Kevin Hao <haokexin@...il.com>,
        Aleksandr Nogikh <nogikh@...gle.com>,
        Marco Elver <elver@...gle.com>, Yonghong Song <yhs@...com>,
        kpsingh@...nel.org, Andrii Nakryiko <andrii@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>,
        netdev <netdev@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
        chenhao288@...ilicon.com,
        Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
        David Ahern <dsahern@...nel.org>, memxor@...il.com,
        linux@...pel-privat.de, Antoine Tenart <atenart@...nel.org>,
        Wei Wang <weiwan@...gle.com>, Taehee Yoo <ap420073@...il.com>,
        Arnd Bergmann <arnd@...db.de>,
        Mat Martineau <mathew.j.martineau@...ux.intel.com>,
        aahringo@...hat.com, ceggers@...i.de, yangbo.lu@....com,
        Florian Westphal <fw@...len.de>, xiangxia.m.yue@...il.com,
        linmiaohe <linmiaohe@...wei.com>
Subject: Re: [PATCH RFC 0/7] add socket to netdev page frag recycling support

On Wed, Aug 18, 2021 at 5:33 AM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>
> This patchset adds socket-to-netdev page frag recycling support,
> based on the busy polling and page pool infrastructure.

I really do not see how this can scale to thousands of sockets.

tcp_mem[] defaults to ~9% of physical memory.
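
For reference, that ~9% is the hard limit computed by tcp_init_mem() in
net/ipv4/tcp.c, paraphrased below from recent kernels (exact code may
differ by version):

    static void __init tcp_init_mem(void)
    {
            /* Start from ~1/16 of the pages usable for buffers. */
            unsigned long limit = nr_free_buffer_pages() / 16;

            limit = max(limit, 128UL);
            sysctl_tcp_mem[0] = limit / 4 * 3;              /* ~4.68% */
            sysctl_tcp_mem[1] = limit;                      /* ~6.25% */
            sysctl_tcp_mem[2] = sysctl_tcp_mem[0] * 2;      /* ~9.37% */
    }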

If you now run tests with thousands of sockets, their skbs will consume
gigabytes of memory on typical servers, now backed by order-0 pages
(instead of the current order-3 pages), so IOMMU costs will actually be
much bigger.
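
Back-of-the-envelope arithmetic (a hypothetical userspace check, not
kernel code, assuming 4 KiB base pages): covering the same amount of
skb frag memory with order-0 instead of order-3 pages takes 8x as many
IOMMU mappings.

    #include <stdio.h>

    int main(void)
    {
            const unsigned long gib = 1UL << 30;   /* 1 GiB of skb frags */
            const unsigned long page = 4096;       /* order-0 page size  */

            /* order-3 pages are 32 KiB each: 32768 mappings per GiB */
            printf("order-3: %lu mappings\n", gib / (page << 3));
            /* order-0 pages are 4 KiB each: 262144 mappings per GiB */
            printf("order-0: %lu mappings\n", gib / page);
            return 0;
    }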

Are we planning to use gigabyte-sized page pools for NICs?

Have you tried instead making TCP frags twice as big? That would
require fewer IOMMU mappings.
(Note: this could require some mm help, since PAGE_ALLOC_COSTLY_ORDER
is currently 3, not 4.)

diff --git a/net/core/sock.c b/net/core/sock.c
index a3eea6e0b30a7d43793f567ffa526092c03e3546..6b66b51b61be9f198f6f1c4a3d81b57fa327986a 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2560,7 +2560,7 @@ static void sk_leave_memory_pressure(struct sock *sk)
        }
 }

-#define SKB_FRAG_PAGE_ORDER    get_order(32768)
+#define SKB_FRAG_PAGE_ORDER    get_order(65536)
 DEFINE_STATIC_KEY_FALSE(net_high_order_alloc_disable_key);

 /**
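
With 4 KiB pages, get_order(65536) is 4, one step above
PAGE_ALLOC_COSTLY_ORDER (3), which is why the note above mentions
needing mm help. A hypothetical userspace approximation of the
arithmetic:

    #include <stdio.h>

    /* Smallest order such that (4096 << order) >= size; this mirrors
     * what the kernel's get_order() computes for 4 KiB pages.
     */
    static int get_order_4k(unsigned long size)
    {
            int order = 0;

            while ((4096UL << order) < size)
                    order++;
            return order;
    }

    int main(void)
    {
            printf("get_order(32768) = %d\n", get_order_4k(32768)); /* 3 */
            printf("get_order(65536) = %d\n", get_order_4k(65536)); /* 4 */
            return 0;
    }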



>
> The performance improves from 30Gbit to 41Gbit for a one-thread iperf
> tcp flow, and CPU usage decreases by about 20% for a four-thread iperf
> flow at 100Gb line speed in IOMMU strict mode.
>
> The performance improves by about 2.5% for a one-thread iperf tcp flow
> in IOMMU passthrough mode.
>
> Yunsheng Lin (7):
>   page_pool: refactor the page pool to support multi alloc context
>   skbuff: add interface to manipulate frag count for tx recycling
>   net: add NAPI api to register and retrieve the page pool ptr
>   net: pfrag_pool: add pfrag pool support based on page pool
>   sock: support refilling pfrag from pfrag_pool
>   net: hns3: support tx recycling in the hns3 driver
>   sysctl_tcp_use_pfrag_pool
>
>  drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 32 +++++----
>  include/linux/netdevice.h                       |  9 +++
>  include/linux/skbuff.h                          | 43 +++++++++++-
>  include/net/netns/ipv4.h                        |  1 +
>  include/net/page_pool.h                         | 15 ++++
>  include/net/pfrag_pool.h                        | 24 +++++++
>  include/net/sock.h                              |  1 +
>  net/core/Makefile                               |  1 +
>  net/core/dev.c                                  | 34 ++++++++-
>  net/core/page_pool.c                            | 86 ++++++++++++-----------
>  net/core/pfrag_pool.c                           | 92 +++++++++++++++++++++++++
>  net/core/sock.c                                 | 12 ++++
>  net/ipv4/sysctl_net_ipv4.c                      |  7 ++
>  net/ipv4/tcp.c                                  | 34 ++++++---
>  14 files changed, 325 insertions(+), 66 deletions(-)
>  create mode 100644 include/net/pfrag_pool.h
>  create mode 100644 net/core/pfrag_pool.c
>
> --
> 2.7.4
>
