Message-ID: <1bef4a35-efaa-4083-8ed5-8818fe285db5@huawei.com>
Date: Thu, 16 Jan 2025 20:52:04 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Jesper Dangaard Brouer <hawk@...nel.org>, <davem@...emloft.net>,
<kuba@...nel.org>, <pabeni@...hat.com>
CC: <zhangkun09@...wei.com>, <liuyonglong@...wei.com>,
<fanghaiqing@...wei.com>, Alexander Lobakin <aleksander.lobakin@...el.com>,
Robin Murphy <robin.murphy@....com>, Alexander Duyck
<alexander.duyck@...il.com>, Andrew Morton <akpm@...ux-foundation.org>, IOMMU
<iommu@...ts.linux.dev>, MM <linux-mm@...ck.org>, Alexei Starovoitov
<ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>, John Fastabend
<john.fastabend@...il.com>, Matthias Brugger <matthias.bgg@...il.com>,
AngeloGioacchino Del Regno <angelogioacchino.delregno@...labora.com>,
<netdev@...r.kernel.org>, <intel-wired-lan@...ts.osuosl.org>,
<bpf@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>, <linux-mediatek@...ts.infradead.org>
Subject: Re: [PATCH net-next v7 0/8] fix two bugs related to page_pool
On 2025/1/16 1:40, Jesper Dangaard Brouer wrote:
>
>
> On 15/01/2025 12.33, Yunsheng Lin wrote:
>> On 2025/1/14 22:31, Jesper Dangaard Brouer wrote:
>>>
>>>
>>> On 10/01/2025 14.06, Yunsheng Lin wrote:
>>>> This patchset fixes a possible time-window problem for page_pool and
>>>> the DMA API misuse problem mentioned in [1], and tries to avoid the
>>>> overhead of the fix by using some optimizations.
>>>>
>>>> From the performance data below, the overhead is not obvious for
>>>> time_bench_page_pool01_fast_path() and time_bench_page_pool02_ptr_ring()
>>>> due to run-to-run variation, while time_bench_page_pool03_slow() shows
>>>> about 20ns of overhead from fixing the bug.
>>>>
>>>
>>> My benchmarking on x86_64 CPUs looks significantly different.
>>> - CPU: Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz
>>>
>>> Benchmark (bench_page_pool_simple) results from before and after patchset:
>>>
>>> | Test name  | Cycles |       |    |Nanosec |        |       | %      |
>>> | (tasklet_*)| Before | After |diff| Before | After  | diff  | change |
>>> |------------+--------+-------+----+--------+--------+-------+--------|
>>> | fast_path  |     19 |    24 |   5|  5.399 |  6.928 | 1.529 |   28.3 |
>>> | ptr_ring   |     54 |    79 |  25| 15.090 | 21.976 | 6.886 |   45.6 |
>>> | slow       |    238 |   299 |  61| 66.134 | 83.298 |17.164 |   26.0 |
>>> #+TBLFM: $4=$3-$2::$7=$6-$5::$8=(($7/$5)*100);%.1f
>>>
>>> My testing above shows a clear performance regression across three
>>> different page_pool operating modes.
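
Just to spell out what those three modes correspond to in terms of the
page_pool API, something like the sketch below is roughly the kind of
loop being timed (this is only an illustration of the recycling paths,
not the actual bench_page_pool_simple code; the helper name and loop
count are made up):

#include <linux/gfp.h>
#include <net/page_pool/helpers.h>

/* Sketch: allocate a page from the pool and return it immediately.
 * allow_direct == true roughly matches the fast_path mode (the page is
 * recycled into the per-CPU alloc cache), allow_direct == false matches
 * the ptr_ring mode (the page is recycled through the pool's ptr_ring),
 * and the slow mode is when recycling is not possible and the page
 * falls back to the page allocator.
 */
static int pp_recycle_sketch(struct page_pool *pool, bool allow_direct)
{
	struct page *page;
	int i;

	for (i = 0; i < 1000; i++) {
		page = page_pool_alloc_pages(pool, GFP_ATOMIC);
		if (!page)
			return -ENOMEM;

		page_pool_put_full_page(pool, page, allow_direct);
	}

	return 0;
}
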
>>
>> I retested it on an arm64 server patch by patch (the raw performance
>> data is in the attachment), and the results seem similar to before.
>>
>> Before this patchset:
>>         fast_path    ptr_ring     slow
>> 1.      31.171 ns    60.980 ns    164.917 ns
>> 2.      28.824 ns    60.891 ns    170.241 ns
>> 3.      14.236 ns    60.583 ns    164.355 ns
>>
>> With patches 1-4:
>> 4.      31.443 ns    53.242 ns    210.148 ns
>> 5.      31.406 ns    53.270 ns    210.189 ns
>>
>> With patches 1-5:
>> 6.      26.163 ns    53.781 ns    189.450 ns
>> 7.      26.189 ns    53.798 ns    189.466 ns
>>
>> With patches 1-8:
>> 8.      28.108 ns    68.199 ns    202.516 ns
>> 9.      16.128 ns    55.904 ns    202.711 ns
>>
>> I am not able to get hold of an x86 server yet; I might be able
>> to get one over the weekend.
>>
>> Theoretically, patches 1-4 or 1-5 should not have much performance
>> impact on fast_path and ptr_ring except for the rcu_read_lock() added
>> in page_pool_napi_local(), so it would be good if patches 1-5 could
>> also be tested in your testlab with the rcu_read_lock() removed from
>> page_pool_napi_local().
>>
>
> What are you saying?
> - (1) test patch 1-5
> - or (2) test patch 1-5 but revert patch 2 with page_pool_napi_local()
Patches 1-5 with the below diff applied:
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -1207,10 +1207,8 @@ static bool page_pool_napi_local(const struct page_pool *pool)
 	/* Synchronizated with page_pool_destory() to avoid use-after-free
 	 * for 'napi'.
 	 */
-	rcu_read_lock();
 	napi = READ_ONCE(pool->p.napi);
 	napi_local = napi && READ_ONCE(napi->list_owner) == cpuid;
-	rcu_read_unlock();
 
 	return napi_local;
 }
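
i.e. with that applied, the tail of page_pool_napi_local() would read as
below (reconstructed from the hunk above, only as a sketch; the earlier
part of the function is unchanged):

	/* Synchronizated with page_pool_destory() to avoid use-after-free
	 * for 'napi'.
	 */
	napi = READ_ONCE(pool->p.napi);
	napi_local = napi && READ_ONCE(napi->list_owner) == cpuid;

	return napi_local;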