Message-ID: <2cf4b672-d7dc-db3d-ce90-15b4e91c4005@huawei.com>
Date: Wed, 18 Aug 2021 17:36:06 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Eric Dumazet <edumazet@...gle.com>
CC: David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Alexander Duyck <alexander.duyck@...il.com>,
Russell King <linux@...linux.org.uk>,
Marcin Wojtas <mw@...ihalf.com>, <linuxarm@...neuler.org>,
Yisen Zhuang <yisen.zhuang@...wei.com>,
Salil Mehta <salil.mehta@...wei.com>,
Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Alexei Starovoitov <ast@...nel.org>,
"Daniel Borkmann" <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Fenghua Yu <fenghua.yu@...el.com>,
Roman Gushchin <guro@...com>, Peter Xu <peterx@...hat.com>,
"Tang, Feng" <feng.tang@...el.com>, Jason Gunthorpe <jgg@...pe.ca>,
<mcroce@...rosoft.com>, Hugh Dickins <hughd@...gle.com>,
Jonathan Lemon <jonathan.lemon@...il.com>,
Alexander Lobakin <alobakin@...me>,
Willem de Bruijn <willemb@...gle.com>,
wenxu <wenxu@...oud.cn>, Cong Wang <cong.wang@...edance.com>,
Kevin Hao <haokexin@...il.com>,
Aleksandr Nogikh <nogikh@...gle.com>,
Marco Elver <elver@...gle.com>, Yonghong Song <yhs@...com>,
<kpsingh@...nel.org>, "Andrii Nakryiko" <andrii@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>,
netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
<chenhao288@...ilicon.com>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
David Ahern <dsahern@...nel.org>, <memxor@...il.com>,
<linux@...pel-privat.de>, Antoine Tenart <atenart@...nel.org>,
Wei Wang <weiwan@...gle.com>, Taehee Yoo <ap420073@...il.com>,
Arnd Bergmann <arnd@...db.de>,
Mat Martineau <mathew.j.martineau@...ux.intel.com>,
<aahringo@...hat.com>, <ceggers@...i.de>, <yangbo.lu@....com>,
"Florian Westphal" <fw@...len.de>, <xiangxia.m.yue@...il.com>,
linmiaohe <linmiaohe@...wei.com>, <hch@....de>
Subject: Re: [PATCH RFC 0/7] add socket to netdev page frag recycling support

On 2021/8/18 16:57, Eric Dumazet wrote:
> On Wed, Aug 18, 2021 at 5:33 AM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>>
>> This patchset adds the socket to netdev page frag recycling
>> support based on the busy polling and page pool infrastructure.
>
> I really do not see how this can scale to thousands of sockets.
>
> tcp_mem[] defaults to ~ 9 % of physical memory.
>
> If you now run tests with thousands of sockets, their skbs will
> consume gigabytes of memory on typical servers, now backed by order-0
> pages (instead of the current order-3 pages), so IOMMU costs will
> actually be much bigger.
As the page allocator supports bulk allocation now, see:
https://elixir.bootlin.com/linux/latest/source/net/core/page_pool.c#L252
If the DMA API also supported batch mapping/unmapping, maybe having a
small-sized page pool for thousands of sockets would not be a problem?
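
To make it concrete, below is a rough sketch (not the actual page_pool
code; refill_frag_cache() and its parameters are made up for
illustration) of what one bulk refill looks like with
alloc_pages_bulk_array_node(), which is what
__page_pool_alloc_pages_slow() uses:

#include <linux/gfp.h>
#include <linux/mm.h>

/* Fill the empty slots of a small page cache with a single call to the
 * bulk page allocator instead of nr_pages separate alloc_page() calls.
 */
static int refill_frag_cache(struct page **cache, unsigned int nr_pages,
			     gfp_t gfp, int nid)
{
	unsigned long filled;

	/* Returns how many slots of 'cache' hold order-0 pages afterwards. */
	filled = alloc_pages_bulk_array_node(gfp, nid, nr_pages, cache);
	if (!filled)
		return -ENOMEM;

	return filled;
}
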
Christoph Hellwig mentioned batch DMA operation support in the thread
below:
https://www.spinics.net/lists/netdev/msg666715.html
If batched DMA operations were supported, maybe the page pool would
mainly benefit the case of a small number of sockets?
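
Just to illustrate the kind of batching I mean (this is only the
existing scatterlist API, not the interface Christoph described, and
map_pages_batched() is a made-up helper name), mapping many order-0
pages with one dma_map_sg() call instead of one dma_map_page() call per
page:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Map 'count' order-0 pages for device RX with a single dma_map_sg()
 * call, amortizing the per-call IOMMU overhead across the batch.
 */
static int map_pages_batched(struct device *dev, struct page **pages,
			     int count, struct scatterlist *sgl)
{
	int i, mapped;

	sg_init_table(sgl, count);
	for (i = 0; i < count; i++)
		sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

	mapped = dma_map_sg(dev, sgl, count, DMA_FROM_DEVICE);

	return mapped ? mapped : -EIO;
}
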
>
> Are we planning to use Gigabyte sized page pools for NIC ?
>
> Have you tried instead to make TCP frags twice bigger ?
Not yet.
> This would require less IOMMU mappings.
> (Note: This could require some mm help, since PAGE_ALLOC_COSTLY_ORDER
> is currently 3, not 4)
I am not familiar with mm yet, but I will take a look at that :)
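
Just to spell out the orders for my own understanding (assuming
PAGE_SIZE == 4096; this is only a worked example of the change quoted
below, not new code):

/* With PAGE_SIZE == 4096:
 *   get_order(32768) == 3  ->  8 contiguous pages per sk_page_frag refill
 *   get_order(65536) == 4  -> 16 contiguous pages per sk_page_frag refill
 *
 * PAGE_ALLOC_COSTLY_ORDER is 3, so an order-4 allocation falls on the
 * "costly" side of the page allocator heuristics, which I guess is the
 * mm help mentioned above.
 */
#define SKB_FRAG_PAGE_ORDER	get_order(65536)
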
>
> diff --git a/net/core/sock.c b/net/core/sock.c
> index a3eea6e0b30a7d43793f567ffa526092c03e3546..6b66b51b61be9f198f6f1c4a3d81b57fa327986a
> 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -2560,7 +2560,7 @@ static void sk_leave_memory_pressure(struct sock *sk)
> }
> }
>
> -#define SKB_FRAG_PAGE_ORDER get_order(32768)
> +#define SKB_FRAG_PAGE_ORDER get_order(65536)
> DEFINE_STATIC_KEY_FALSE(net_high_order_alloc_disable_key);
>
> /**
>
>
>
>>