Message-ID: <b025e2b7-ff99-4659-811c-8071d4aa8031@intel.com>
Date: Wed, 26 Mar 2025 18:40:27 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Toke Høiland-Jørgensen <toke@...hat.com>
CC: "David S. Miller" <davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>, Saeed Mahameed <saeedm@...dia.com>,
Leon Romanovsky <leon@...nel.org>, Tariq Toukan <tariqt@...dia.com>, "Andrew
Lunn" <andrew+netdev@...n.ch>, Eric Dumazet <edumazet@...gle.com>, Paolo
Abeni <pabeni@...hat.com>, Ilias Apalodimas <ilias.apalodimas@...aro.org>,
"Simon Horman" <horms@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
"Mina Almasry" <almasrymina@...gle.com>, Yonglong Liu
<liuyonglong@...wei.com>, Yunsheng Lin <linyunsheng@...wei.com>, Pavel
Begunkov <asml.silence@...il.com>, Matthew Wilcox <willy@...radead.org>,
<netdev@...r.kernel.org>, <bpf@...r.kernel.org>,
<linux-rdma@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [PATCH net-next v3 2/3] page_pool: Turn dma_sync into a
full-width bool field
From: Toke Høiland-Jørgensen <toke@...hat.com>
Date: Wed, 26 Mar 2025 09:18:39 +0100
> Change the single-bit boolean for dma_sync into a full-width bool, so we
> can read it as volatile with READ_ONCE(). A subsequent patch will add
> writing with WRITE_ONCE() on teardown.
Don't we have something like READ_ONCE(), but for a single bit? Some
atomic bit read, like test_bit()?
>
> Reviewed-by: Mina Almasry <almasrymina@...gle.com>
> Tested-by: Yonglong Liu <liuyonglong@...wei.com>
> Signed-off-by: Toke Høiland-Jørgensen <toke@...hat.com>
> ---
> include/net/page_pool/types.h | 6 +++---
> net/core/page_pool.c | 2 +-
> 2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
> index df0d3c1608929605224feb26173135ff37951ef8..d6c93150384fbc4579bb0d0afb357ebb26c564a3 100644
> --- a/include/net/page_pool/types.h
> +++ b/include/net/page_pool/types.h
> @@ -173,10 +173,10 @@ struct page_pool {
> int cpuid;
> u32 pages_state_hold_cnt;
>
> - bool has_init_callback:1; /* slow::init_callback is set */
> + bool dma_sync; /* Perform DMA sync for device */
> + bool dma_sync_for_cpu:1; /* Perform DMA sync for cpu */
> bool dma_map:1; /* Perform DMA mapping */
> - bool dma_sync:1; /* Perform DMA sync for device */
> - bool dma_sync_for_cpu:1; /* Perform DMA sync for cpu */
> + bool has_init_callback:1; /* slow::init_callback is set */
> #ifdef CONFIG_PAGE_POOL_STATS
> bool system:1; /* This is a global percpu pool */
> #endif
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index acef1fcd8ddcfd1853a6f2055c1f1820ab248e8d..fb32768a97765aacc7f1103bfee38000c988b0de 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -466,7 +466,7 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
> netmem_ref netmem,
> u32 dma_sync_size)
> {
> - if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
> + if (READ_ONCE(pool->dma_sync) && dma_dev_need_sync(pool->p.dev))
> __page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
> }
Thanks,
Olek