Message-ID: <ae07144c-9295-4c9d-a400-153bb689fe9e@huawei.com>
Date: Wed, 12 Mar 2025 20:04:26 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Toke Høiland-Jørgensen <toke@...hat.com>, Yunsheng
Lin <yunshenglin0825@...il.com>, Andrew Morton <akpm@...ux-foundation.org>,
Jesper Dangaard Brouer <hawk@...nel.org>, Ilias Apalodimas
<ilias.apalodimas@...aro.org>, "David S. Miller" <davem@...emloft.net>
CC: Yonglong Liu <liuyonglong@...wei.com>, Mina Almasry
<almasrymina@...gle.com>, Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski
<kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>, Simon Horman
<horms@...nel.org>, <linux-mm@...ck.org>, <netdev@...r.kernel.org>
Subject: Re: [RFC PATCH net-next] page_pool: Track DMA-mapped pages and unmap
them when destroying the pool
On 2025/3/11 21:26, Toke Høiland-Jørgensen wrote:
> Yunsheng Lin <linyunsheng@...wei.com> writes:
>
>> On 2025/3/10 23:24, Toke Høiland-Jørgensen wrote:
>>
>>>>
>>>> I guess that is one of the disadvantages of using an advanced struct
>>>> like Xarray :(
>>>
>>> Sure, there will be some overhead from using xarray, but I think the
>>> simplicity makes up for it; especially since we can limit this to the
>>
>> In my understanding, it is more complicated; it is just that the
>> complexity is hidden behind xarray now.
>
> Yes, which encapsulates the complexity into a shared abstraction that is
> widely used in the kernel, so it does not add new complexity to the
> kernel as a whole. Whereas your series adds a whole bunch of new
> complexity to the kernel in the form of a new slab allocator.
>
>> Even if there is no space in 'struct page' to store the id, the
>> 'struct page' pointer itself could be used as the id if xarray can use
>> a pointer as an id. But that might mean memory utilization is not as
>> efficient as it should be, and performance hurts too if more memory has
>> to be allocated and freed.
>
> I don't think it can. But sure, there can be other ways around this.
>
> FWIW, I don't think your idea of allocating page_pool_items to use as an
> indirection is totally crazy, but all the complexity around it (the
> custom slab allocator etc) is way too much. And if we can avoid the item
> indirection that is obviously better.
>
>> It seems it is just a matter of choice between using a tailor-made,
>> page_pool-specific optimization and using a generic advanced struct
>> like xarray.
>
> Yup, basically.
>
>> I chose the tailor-made one because it ensures the least overhead
>> possible from a performance and memory utilization perspective; for
>> example, the 'single producer, multiple consumer' guarantee offered by
>> the NAPI context can avoid some locks and atomic operations.
>
> Right, and my main point is that the complexity of this is not worth it :)
Without that complexity, there is about 400ns of performance overhead for
xarray compared to about 10ns for the tailor-made one; added on top of the
~170ns baseline of the page_pool03_slow test case, that works out to more
than 200% performance degradation:
[ 9190.217338] bench_page_pool_simple: Loaded
[ 9190.298495] time_bench: Type:for_loop Per elem: 0 cycles(tsc) 0.770 ns (step:0) - (measurement period time:0.077040570 sec time_interval:77040570) - (invoke count:100000000 tsc_interval:7704049)
[ 9190.893294] time_bench: Type:atomic_inc Per elem: 0 cycles(tsc) 5.775 ns (step:0) - (measurement period time:0.577582060 sec time_interval:577582060) - (invoke count:100000000 tsc_interval:57758202)
[ 9191.061026] time_bench: Type:lock Per elem: 1 cycles(tsc) 15.017 ns (step:0) - (measurement period time:0.150170250 sec time_interval:150170250) - (invoke count:10000000 tsc_interval:15017020)
[ 9191.771113] time_bench: Type:rcu Per elem: 0 cycles(tsc) 6.930 ns (step:0) - (measurement period time:0.693045930 sec time_interval:693045930) - (invoke count:100000000 tsc_interval:69304585)
[ 9231.309907] time_bench: Type:xarray Per elem: 39 cycles(tsc) 395.218 ns (step:0) - (measurement period time:39.521827650 sec time_interval:39521827650) - (invoke count:100000000 tsc_interval:3952182703)
[ 9231.327827] bench_page_pool_simple: time_bench_page_pool01_fast_path(): Cannot use page_pool fast-path
[ 9231.640260] time_bench: Type:no-softirq-page_pool01 Per elem: 3 cycles(tsc) 30.316 ns (step:0) - (measurement period time:0.303162870 sec time_interval:303162870) - (invoke count:10000000 tsc_interval:30316281)
[ 9231.658866] bench_page_pool_simple: time_bench_page_pool02_ptr_ring(): Cannot use page_pool fast-path
[ 9232.244955] time_bench: Type:no-softirq-page_pool02 Per elem: 5 cycles(tsc) 57.691 ns (step:0) - (measurement period time:0.576910280 sec time_interval:576910280) - (invoke count:10000000 tsc_interval:57691015)
[ 9232.263567] bench_page_pool_simple: time_bench_page_pool03_slow(): Cannot use page_pool fast-path
[ 9233.990491] time_bench: Type:no-softirq-page_pool03 Per elem: 17 cycles(tsc) 171.809 ns (step:0) - (measurement period time:1.718090950 sec time_interval:1718090950) - (invoke count:10000000 tsc_interval:171809088)
[ 9234.011402] bench_page_pool_simple: pp_tasklet_handler(): in_serving_softirq fast-path
[ 9234.019286] bench_page_pool_simple: time_bench_page_pool01_fast_path(): in_serving_softirq fast-path
[ 9234.328952] time_bench: Type:tasklet_page_pool01_fast_path Per elem: 3 cycles(tsc) 30.057 ns (step:0) - (measurement period time:0.300574060 sec time_interval:300574060) - (invoke count:10000000 tsc_interval:30057400)
[ 9234.348155] bench_page_pool_simple: time_bench_page_pool02_ptr_ring(): in_serving_softirq fast-path
[ 9234.898627] time_bench: Type:tasklet_page_pool02_ptr_ring Per elem: 5 cycles(tsc) 54.146 ns (step:0) - (measurement period time:0.541466850 sec time_interval:541466850) - (invoke count:10000000 tsc_interval:54146680)
[ 9234.917742] bench_page_pool_simple: time_bench_page_pool03_slow(): in_serving_softirq fast-path
[ 9236.691076] time_bench: Type:tasklet_page_pool03_slow Per elem: 17 cycles(tsc) 176.467 ns (step:0) - (measurement period time:1.764675290 sec time_interval:1764675290) - (invoke count:10000000 tsc_interval:176467523)
Tested using the below diff:
+++ b/kernel/lib/bench_page_pool_simple.c
@@ -149,6 +149,48 @@ static int time_bench_rcu(
         return loops_cnt;
 }
 
+static int time_bench_xarray(
+        struct time_bench_record *rec, void *data)
+{
+        uint64_t loops_cnt = 0;
+        struct xarray dma_mapped;
+        int i, err;
+        u32 id;
+        void *old;
+
+        xa_init_flags(&dma_mapped, XA_FLAGS_ALLOC1);
+
+        time_bench_start(rec);
+        /** Loop to measure **/
+        for (i = 0; i < rec->loops; i++) {
+
+                if (in_softirq())
+                        err = xa_alloc(&dma_mapped, &id, &dma_mapped,
+                                       XA_LIMIT(1, UINT_MAX), GFP_ATOMIC);
+                else
+                        err = xa_alloc_bh(&dma_mapped, &id, &dma_mapped,
+                                          XA_LIMIT(1, UINT_MAX), GFP_KERNEL);
+
+                if (err)
+                        break;
+
+                loops_cnt++;
+                barrier(); /* avoid compiler to optimize this loop */
+
+                if (in_softirq())
+                        old = xa_cmpxchg(&dma_mapped, id, &dma_mapped, NULL, 0);
+                else
+                        old = xa_cmpxchg_bh(&dma_mapped, id, &dma_mapped, NULL, 0);
+
+                if (old != &dma_mapped)
+                        break;
+        }
+        time_bench_stop(rec, loops_cnt);
+
+        xa_destroy(&dma_mapped);
+        return loops_cnt;
+}
+
 /* Helper for filling some page's into ptr_ring */
 static void pp_fill_ptr_ring(struct page_pool *pp, int elems)
 {
@@ -334,6 +376,8 @@ static int run_benchmark_tests(void)
                                 "lock", NULL, time_bench_lock);
                 time_bench_loop(nr_loops*10, 0,
                                 "rcu", NULL, time_bench_rcu);
+                time_bench_loop(nr_loops*10, 0,
+                                "xarray", NULL, time_bench_xarray);
         }
 
         /* This test cannot activate correct code path, due to no-softirq ctx */
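
To relate the xarray number back to the RFC being discussed: each loop
iteration above is meant to model one DMA map/unmap pair, i.e. one xa_alloc()
when mapping and one xa_cmpxchg() release when unmapping. A rough sketch of
that pattern (not the actual RFC code; 'dma_mapped' and the helper names
below are illustrative only) would be:

/* Rough sketch of the tracking pattern the benchmark models; the field
 * and function names here are illustrative, not taken from the RFC.
 */
static int pp_track_dma_map(struct page_pool *pool, struct page *page,
                            u32 *id)
{
        /* plain xa_lock is enough in softirq; use the _bh variant otherwise */
        if (in_softirq())
                return xa_alloc(&pool->dma_mapped, id, page,
                                XA_LIMIT(1, UINT_MAX), GFP_ATOMIC);

        return xa_alloc_bh(&pool->dma_mapped, id, page,
                           XA_LIMIT(1, UINT_MAX), GFP_KERNEL);
}

static bool pp_untrack_dma_map(struct page_pool *pool, struct page *page,
                               u32 id)
{
        void *old;

        if (in_softirq())
                old = xa_cmpxchg(&pool->dma_mapped, id, page, NULL, 0);
        else
                old = xa_cmpxchg_bh(&pool->dma_mapped, id, page, NULL, 0);

        /* only the caller that still owns the entry should go on to
         * dma-unmap the page
         */
        return old == page;
}

So the ~395ns per element measured for the 'xarray' case above is the cost of
that alloc+release pair, which is what gets added on top of page_pool03_slow.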
>
>>> cases where it's absolutely needed.
>>
>> The above can also be done for page_pool_item too, as the lower 2 bits
>> can be used to indicate whether the pointer in 'struct page' is a
>> 'page_pool_item' or a 'page_pool'. I just don't think it is necessary
>> yet, as it might add more checking in the fast path.
>
> Yup, did think about using the lower bits to distinguish if it does turn
> out that we can't avoid an indirection. See above; it's not actually the
The 'memdesc' seems like an indirection to me when it is used to shrink
'struct page' to a smaller size.
> page_pool_item concept that is my main issue with your series.
>
> -Toke
>
>