Message-ID: <ca9ab650-3f77-509c-7a29-6d7dd775b6d1@huawei.com>
Date: Tue, 26 Mar 2024 20:47:18 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Mina Almasry <almasrymina@...gle.com>, YiFei Zhu <zhuyifei@...gle.com>
CC: <netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-doc@...r.kernel.org>, <linux-alpha@...r.kernel.org>,
<linux-mips@...r.kernel.org>, <linux-parisc@...r.kernel.org>,
<sparclinux@...r.kernel.org>, <linux-trace-kernel@...r.kernel.org>,
<linux-arch@...r.kernel.org>, <bpf@...r.kernel.org>,
<linux-kselftest@...r.kernel.org>, <linux-media@...r.kernel.org>,
<dri-devel@...ts.freedesktop.org>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>, Paolo
Abeni <pabeni@...hat.com>, Jonathan Corbet <corbet@....net>, Richard
Henderson <richard.henderson@...aro.org>, Ivan Kokshaysky
<ink@...assic.park.msu.ru>, Matt Turner <mattst88@...il.com>, Thomas
Bogendoerfer <tsbogend@...ha.franken.de>, "James E.J. Bottomley"
<James.Bottomley@...senpartnership.com>, Helge Deller <deller@....de>,
Andreas Larsson <andreas@...sler.com>, Jesper Dangaard Brouer
<hawk@...nel.org>, Ilias Apalodimas <ilias.apalodimas@...aro.org>, Steven
Rostedt <rostedt@...dmis.org>, Masami Hiramatsu <mhiramat@...nel.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Arnd Bergmann
<arnd@...db.de>, Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann
<daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau
<martin.lau@...ux.dev>, Eduard Zingerman <eddyz87@...il.com>, Song Liu
<song@...nel.org>, Yonghong Song <yonghong.song@...ux.dev>, John Fastabend
<john.fastabend@...il.com>, KP Singh <kpsingh@...nel.org>, Stanislav Fomichev
<sdf@...gle.com>, Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
David Ahern <dsahern@...nel.org>, Willem de Bruijn
<willemdebruijn.kernel@...il.com>, Shuah Khan <shuah@...nel.org>, Sumit
Semwal <sumit.semwal@...aro.org>, Christian König
<christian.koenig@....com>, Pavel Begunkov <asml.silence@...il.com>, David
Wei <dw@...idwei.uk>, Jason Gunthorpe <jgg@...pe.ca>, Shailend Chand
<shailend@...gle.com>, Harshitha Ramamurthy <hramamurthy@...gle.com>, Shakeel
Butt <shakeelb@...gle.com>, Jeroen de Borst <jeroendb@...gle.com>, Praveen
Kaligineedi <pkaligineedi@...gle.com>
Subject: Re: [RFC PATCH net-next v6 00/15] Device Memory TCP
On 2024/3/26 8:28, Mina Almasry wrote:
> On Tue, Mar 5, 2024 at 11:38 AM Mina Almasry <almasrymina@...gle.com> wrote:
>>
>> On Tue, Mar 5, 2024 at 4:54 AM Yunsheng Lin <linyunsheng@...wei.com> wrote:
>>>
>>> On 2024/3/5 10:01, Mina Almasry wrote:
>>>
>>> ...
>>>
>>>>
>>>> Perf - page-pool benchmark:
>>>> ---------------------------
>>>>
>>>> bench_page_pool_simple.ko tests with and without these changes:
>>>> https://pastebin.com/raw/ncHDwAbn
>>>>
>>>> AFAIK the number that really matters in the perf tests is the
>>>> 'tasklet_page_pool01_fast_path Per elem'. This one measures at about 8
>>>> cycles without the changes but there is some 1 cycle noise in some
>>>> results.
>>>>
>>>> With the patches this regresses to 9 cycles with the changes but there
>>>> is 1 cycle noise occasionally running this test repeatedly.
>>>>
>>>> Lastly I tried disabling the static_branch_unlikely() in the
>>>> netmem_is_net_iov() check. To my surprise disabling the
>>>> static_branch_unlikely() check reduces the fast path back to 8 cycles,
>>>> but the 1 cycle noise remains.
>>>>
>>>
>>> The last sentence seems to be suggesting that the above 1 ns regression
>>> is caused by the static_branch_unlikely() checking?
>>
>> Note it's not a 1ns regression, it looks like maybe a 1 cycle
>> regression (slightly less than 1ns if I'm reading the output of the
>> test correctly):
>>
>> # clean net-next
>> time_bench: Type:tasklet_page_pool01_fast_path Per elem: 8 cycles(tsc)
>> 2.993 ns (step:0)
>>
>> # with patches
>> time_bench: Type:tasklet_page_pool01_fast_path Per elem: 9 cycles(tsc)
>> 3.679 ns (step:0)
>>
>> # with patches and with diff that disables static branching:
>> time_bench: Type:tasklet_page_pool01_fast_path Per elem: 8 cycles(tsc)
>> 3.248 ns (step:0)
>>
>> I do see noise in the test results between run and run, and any
>> regression (if any) is slightly obfuscated by the noise, so it's a bit
>> hard to make confident statements. So far it looks like a ~0.25ns
>> regression without static branch and about ~0.65ns with static branch.
>>
>> Honestly when I saw all 3 results were within some noise I did not
>> investigate more, but if this looks concerning to you I can dig
>> further. I likely need to gather a few test runs to filter out the
>> noise and maybe investigate the assembly my compiler is generating to
>> maybe narrow down what changes there.
>>
>
> I did some more investigation here to gather more data to filter out
> the noise, and recorded the summary here:
>
> https://pastebin.com/raw/v5dYRg8L
>
> Long story short, the page_pool benchmark results are consistent with
> some outlier noise results that I'm discounting here. Currently
> page_pool fast path is at 8 cycles
>
> [ 2115.724510] time_bench: Type:tasklet_page_pool01_fast_path Per
> elem: 8 cycles(tsc) 3.187 ns (step:0) - (measurement period
> time:0.031870585 sec time_interval:31870585) - (invoke count:10000000
> tsc_interval:86043192)
>
> and with this patch series it degrades to 10 cycles, or about a 0.7ns
> degradation or so:

Even if the absolute value of the overhead is small, we seem to have a
degradation of about 20% for the tasklet_page_pool01_fast_path testcase,
which seems scary.

I am assuming that every page is recyclable in the
tasklet_page_pool01_fast_path testcase, and since that code path matters
for page_pool, it would be good to remove any additional checking from it.

And we already have the pool->has_init_callback check when we have to use
a new page, so it may make sense to refactor that so the provider case
shares the same check, to keep the overhead as low as possible.
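
Something like the below completely untested sketch is roughly what I
have in mind; note that the struct and the flag/field names are made up
only to illustrate the idea and are not taken from the actual page_pool
code:

/*
 * Untested sketch: fold the "needs extra handling" conditions into a
 * single flags word that is filled in once at pool creation time, so
 * the fast path tests one field no matter how many such features are
 * added later.
 */
#include <stdbool.h>

#define PP_SLOW_INIT_CB		(1UL << 0)	/* an init_callback is set */
#define PP_SLOW_MEM_PROVIDER	(1UL << 1)	/* memory comes from a provider */

struct pp_sketch {
	/* written once at pool creation time, read-only afterwards */
	unsigned long slow_flags;
};

/* single test in the hot path covering both cases above */
static inline bool pp_needs_slow_handling(const struct pp_sketch *pool)
{
	return pool->slow_flags != 0;
}

With something like that, memory provider support would not add a new
test to the fast path, it would only set one more bit at pool creation
time.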

Also, I am not sure if it really matters that much, as with
netmem_is_net_iov() checking spreading through the networking code, the
overhead might add up for other cases too.

>
> [ 498.226127] time_bench: Type:tasklet_page_pool01_fast_path Per
> elem: 10 cycles(tsc) 3.944 ns (step:0) - (measurement period
> time:0.039442539 sec time_interval:39442539) - (invoke count:10000000
> tsc_interval:106485268)
>
> I took the time to dig into where the degradation comes from, and to
> my surprise we can shave off 1 cycle in perf by removing the
> static_branch_unlikely check in netmem_is_net_iov() like so:
>
> diff --git a/include/net/netmem.h b/include/net/netmem.h
> index fe354d11a421..2b4310ac1115 100644
> --- a/include/net/netmem.h
> +++ b/include/net/netmem.h
> @@ -122,8 +122,7 @@ typedef unsigned long __bitwise netmem_ref;
>  static inline bool netmem_is_net_iov(const netmem_ref netmem)
>  {
>  #ifdef CONFIG_PAGE_POOL
> -	return static_branch_unlikely(&page_pool_mem_providers) &&
> -	       (__force unsigned long)netmem & NET_IOV;
> +	return (__force unsigned long)netmem & NET_IOV;
>  #else
>  	return false;
>  #endif
>
> With this change, the fast path is 9 cycles, only a 1 cycle (~0.35ns)
> regression:
>
> [ 199.184429] time_bench: Type:tasklet_page_pool01_fast_path Per
> elem: 9 cycles(tsc) 3.552 ns (step:0) - (measurement period
> time:0.035524013 sec time_interval:35524013) - (invoke count:10000000
> tsc_interval:95907775)
>
> I did some digging with YiFei on why the static_branch_unlikely
> appears to be causing a 1 cycle regression, but could not get an
> answer that makes sense. The # of instructions in
> page_pool_return_page() with the static_branch_unlikely and without is
> about the same in the compiled .o file, and my understanding is that
> static_branch will cause code re-writing anyway so looking at the
> compiled code may not be representative.
>
> Worthy of note is that I get ~95% line rate of devmem TCP regardless
> of the static_branch_unlikely() or not, so impact of the static_branch
> is not large enough to be measurable end-to-end. I'm thinking I want
> to drop the static_branch_unlikely() in the next RFC since it doesn't
> improve the end-to-end throughput number and dropping it results in a
> measurable improvement in the page pool benchmark.
>