Message-ID: <2d2154f4-c735-a9b3-7940-f8830fee6229@gmail.com>
Date: Wed, 25 Aug 2021 09:38:55 -0700
From: David Ahern <dsahern@...il.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: Yunsheng Lin <linyunsheng@...wei.com>,
David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Alexander Duyck <alexander.duyck@...il.com>,
Russell King <linux@...linux.org.uk>,
Marcin Wojtas <mw@...ihalf.com>, linuxarm@...neuler.org,
Yisen Zhuang <yisen.zhuang@...wei.com>,
Salil Mehta <salil.mehta@...wei.com>,
Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
Jesper Dangaard Brouer <hawk@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
John Fastabend <john.fastabend@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>,
Matthew Wilcox <willy@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Fenghua Yu <fenghua.yu@...el.com>,
Roman Gushchin <guro@...com>, Peter Xu <peterx@...hat.com>,
"Tang, Feng" <feng.tang@...el.com>, Jason Gunthorpe <jgg@...pe.ca>,
mcroce@...rosoft.com, Hugh Dickins <hughd@...gle.com>,
Jonathan Lemon <jonathan.lemon@...il.com>,
Alexander Lobakin <alobakin@...me>,
Willem de Bruijn <willemb@...gle.com>,
wenxu <wenxu@...oud.cn>, Cong Wang <cong.wang@...edance.com>,
Kevin Hao <haokexin@...il.com>,
Aleksandr Nogikh <nogikh@...gle.com>,
Marco Elver <elver@...gle.com>, Yonghong Song <yhs@...com>,
kpsingh@...nel.org, Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>,
netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
chenhao288@...ilicon.com,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
David Ahern <dsahern@...nel.org>, memxor@...il.com,
linux@...pel-privat.de, Antoine Tenart <atenart@...nel.org>,
Wei Wang <weiwan@...gle.com>, Taehee Yoo <ap420073@...il.com>,
Arnd Bergmann <arnd@...db.de>,
Mat Martineau <mathew.j.martineau@...ux.intel.com>,
aahringo@...hat.com, ceggers@...i.de, yangbo.lu@....com,
Florian Westphal <fw@...len.de>, xiangxia.m.yue@...il.com,
linmiaohe <linmiaohe@...wei.com>, Christoph Hellwig <hch@....de>
Subject: Re: [Linuxarm] Re: [PATCH RFC 0/7] add socket to netdev page frag recycling support

On 8/25/21 9:32 AM, Eric Dumazet wrote:
> On Wed, Aug 25, 2021 at 9:29 AM David Ahern <dsahern@...il.com> wrote:
>>
>> On 8/23/21 8:04 AM, Eric Dumazet wrote:
>>>>
>>>>
>>>> It seems PAGE_ALLOC_COSTLY_ORDER is mostly related to pcp pages, OOM,
>>>> memory compaction and memory isolation. As the test system has a lot of
>>>> memory installed (about 500G, only 3-4G of it used), I used the below patch
>>>> to test the maximum possible improvement from making TCP frags twice as
>>>> big, and the throughput went from about 30Gbit to 32Gbit for a
>>>> single-thread iperf TCP flow in IOMMU strict mode,
>>>
>>> This is encouraging, and means we can do much better.
>>>
>>> Even with SKB_FRAG_PAGE_ORDER set to 4, typical skbs will need 3 mappings:
>>>
>>> 1) One for the headers (in skb->head)
>>> 2) Two for the page frags, because one TSO packet payload is not a nice power of two.
>>
>> Interesting observation. I have noticed 17 mappings with the ZC API. That
>> might explain the lower-than-expected performance bump with IOMMU strict mode.
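
To make that arithmetic concrete, here is a tiny userspace sketch. The numbers
are illustrative only (a 1448-byte MSS, 45 segments per TSO packet and a
20000-byte starting offset are my assumptions, not figures from this thread):
with 64KB frag pages (SKB_FRAG_PAGE_ORDER = 4 on 4KB pages) an aligned payload
fits in one frag, but a payload starting mid-page straddles two, plus the
mapping for skb->head.

/*
 * Rough arithmetic sketch, not kernel code: a frag page of frag_sz bytes
 * is filled sequentially by successive sends, so a TSO payload starting
 * at an arbitrary offset inside the current frag page usually straddles
 * the page boundary, and the skb ends up with 2 payload frags plus one
 * mapping for skb->head.
 */
#include <stdio.h>

static unsigned int frags_needed(unsigned int frag_sz,
				 unsigned int start_off,
				 unsigned int payload)
{
	unsigned int end = start_off + payload;

	/* number of frag pages touched by [start_off, start_off + payload) */
	return (end + frag_sz - 1) / frag_sz - start_off / frag_sz;
}

int main(void)
{
	unsigned int frag_sz = 64 * 1024;	/* SKB_FRAG_PAGE_ORDER = 4, 4KB pages */
	unsigned int payload = 45 * 1448;	/* ~64KB TSO payload, not a power of two */

	/* payload aligned to the start of a frag page: one frag is enough */
	printf("aligned:   %u payload frag(s) + 1 for skb->head\n",
	       frags_needed(frag_sz, 0, payload));

	/* more typical: payload starts somewhere inside the frag page */
	printf("unaligned: %u payload frag(s) + 1 for skb->head\n",
	       frags_needed(frag_sz, 20000, payload));
	return 0;
}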
>
> Note that if application is using huge pages, things get better after
>
> commit 394fcd8a813456b3306c423ec4227ed874dfc08b
> Author: Eric Dumazet <edumazet@...gle.com>
> Date: Thu Aug 20 08:43:59 2020 -0700
>
> net: zerocopy: combine pages in zerocopy_sg_from_iter()
>
> Currently, tcp sendmsg(MSG_ZEROCOPY) is building skbs with order-0
> fragments. Compared to standard sendmsg(), these skbs usually contain
> up to 16 fragments on arches with 4KB page sizes, instead of two.
>
> This adds considerable costs on various ndo_start_xmit() handlers,
> especially when IOMMU is in the picture.
>
> As high performance applications are often using huge pages,
> we can try to combine adjacent pages belonging to same
> compound page.
>
> Tested on AMD Rome platform, with IOMMU, nominal single TCP flow speed
> is roughly doubled (~55Gbit -> ~100Gbit), when user application
> is using hugepages.
>
> For reference, nominal single TCP flow speed on this platform
> without MSG_ZEROCOPY is ~65Gbit.
>
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
> Cc: Willem de Bruijn <willemb@...gle.com>
> Signed-off-by: David S. Miller <davem@...emloft.net>
>
> Ideally the gup stuff should really directly deal with hugepages, so that
> we avoid all these crazy refcounting games on the per-huge-page central
> refcount.
>
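
For anyone following along, here is a rough userspace model of the coalescing
idea in that commit. It is a sketch of the idea only, not the kernel
implementation: the real code in zerocopy_sg_from_iter() also checks that the
pages belong to the same compound page, while this toy version only checks
address adjacency.

/*
 * Toy model of frag coalescing: while filling "frags" from pinned user
 * pages, merge a chunk into the previous frag when it starts exactly
 * where that frag ends, so 16 consecutive 4KB chunks of one compound
 * (huge) page collapse into a single large frag instead of 16 order-0
 * frags.
 */
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE	4096u
#define MAX_FRAGS	17		/* roughly MAX_SKB_FRAGS with 4KB pages */

struct frag {
	size_t page_addr;		/* stand-in for a struct page pointer */
	size_t offset;
	size_t size;
};

static int add_or_coalesce(struct frag *frags, int *nr,
			   size_t page_addr, size_t offset, size_t len)
{
	if (*nr > 0) {
		struct frag *last = &frags[*nr - 1];

		/* new chunk starts exactly where the last frag ends: merge */
		if (page_addr + offset ==
		    last->page_addr + last->offset + last->size) {
			last->size += len;
			return 0;
		}
	}
	if (*nr >= MAX_FRAGS)
		return -1;
	frags[*nr].page_addr = page_addr;
	frags[*nr].offset = offset;
	frags[*nr].size = len;
	(*nr)++;
	return 0;
}

int main(void)
{
	struct frag frags[MAX_FRAGS];
	size_t base = 0x200000;		/* pretend: start of one 2MB hugepage */
	int nr = 0;
	size_t i;

	/* 16 consecutive 4KB chunks of the same compound page... */
	for (i = 0; i < 16; i++)
		add_or_coalesce(frags, &nr, base + i * PAGE_SIZE, 0, PAGE_SIZE);

	/* ...collapse into one frag instead of sixteen */
	printf("frags used: %d, frag[0] size: %zu bytes\n", nr, frags[0].size);
	return 0;
}
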
Thanks for the pointer. I need to revisit my past attempt to get iperf3
working with hugepages.
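
In case it is useful, a minimal sketch of what the send side of such a test
could look like: a MAP_HUGETLB buffer sent with MSG_ZEROCOPY, so the kernel
can coalesce the compound-page frags as in the commit above. This is not
iperf3 code; "fd" is assumed to be an already-connected TCP socket, hugepages
are assumed to be reserved on the system, and draining the completion
notifications from the error queue is left out.

/* Sketch only: hugepage-backed zerocopy send, assumptions noted above. */
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <stdio.h>
#include <string.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY	60
#endif
#ifndef MSG_ZEROCOPY
#define MSG_ZEROCOPY	0x4000000
#endif

#define BUF_LEN		(2UL * 1024 * 1024)	/* one 2MB hugepage */

static int send_zc_from_hugepage(int fd)
{
	int one = 1;
	ssize_t sent;
	void *buf;

	/* hugepage-backed buffer, so the frags come from one compound page */
	buf = mmap(NULL, BUF_LEN, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return -1;
	}
	memset(buf, 0xa5, BUF_LEN);

	/* opt in to zerocopy on the socket, then send with MSG_ZEROCOPY */
	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) < 0) {
		perror("setsockopt(SO_ZEROCOPY)");
		return -1;
	}
	sent = send(fd, buf, BUF_LEN, MSG_ZEROCOPY);
	if (sent < 0)
		perror("send(MSG_ZEROCOPY)");

	/*
	 * A real application must wait for the zerocopy completion on the
	 * socket error queue (MSG_ERRQUEUE) before reusing or unmapping
	 * the buffer; that part is intentionally omitted here.
	 */
	return sent < 0 ? -1 : 0;
}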