Message-Id: <20250327-page-pool-track-dma-v4-2-b380dc6706d0@redhat.com>
Date: Thu, 27 Mar 2025 11:44:12 +0100
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, Jesper Dangaard Brouer <hawk@...nel.org>,
Saeed Mahameed <saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>,
Tariq Toukan <tariqt@...dia.com>, Andrew Lunn <andrew+netdev@...n.ch>,
Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Simon Horman <horms@...nel.org>, Andrew Morton <akpm@...ux-foundation.org>,
Mina Almasry <almasrymina@...gle.com>,
Yonglong Liu <liuyonglong@...wei.com>,
Yunsheng Lin <linyunsheng@...wei.com>,
Pavel Begunkov <asml.silence@...il.com>,
Matthew Wilcox <willy@...radead.org>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org, linux-rdma@...r.kernel.org,
linux-mm@...ck.org,
Toke Høiland-Jørgensen <toke@...hat.com>
Subject: [PATCH net-next v4 2/3] page_pool: Turn dma_sync into a full-width
bool field

Change the single-bit boolean for dma_sync into a full-width bool, so we
can read it as volatile with READ_ONCE(). A subsequent patch will add
writing with WRITE_ONCE() on teardown.

Reviewed-by: Mina Almasry <almasrymina@...gle.com>
Tested-by: Yonglong Liu <liuyonglong@...wei.com>
Acked-by: Jesper Dangaard Brouer <hawk@...nel.org>
Signed-off-by: Toke Høiland-Jørgensen <toke@...hat.com>
---
 include/net/page_pool/types.h | 6 +++---
 net/core/page_pool.c          | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index df0d3c1608929605224feb26173135ff37951ef8..d6c93150384fbc4579bb0d0afb357ebb26c564a3 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -173,10 +173,10 @@ struct page_pool {
 	int cpuid;
 	u32 pages_state_hold_cnt;
 
-	bool has_init_callback:1;	/* slow::init_callback is set */
+	bool dma_sync;			/* Perform DMA sync for device */
+	bool dma_sync_for_cpu:1;	/* Perform DMA sync for cpu */
 	bool dma_map:1;			/* Perform DMA mapping */
-	bool dma_sync:1;		/* Perform DMA sync for device */
-	bool dma_sync_for_cpu:1;	/* Perform DMA sync for cpu */
+	bool has_init_callback:1;	/* slow::init_callback is set */
 #ifdef CONFIG_PAGE_POOL_STATS
 	bool system:1;			/* This is a global percpu pool */
 #endif
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 7745ad924ae2d801580a6760eba9393e1cf67b01..c75d2add42b887f9a3a74e3fb1b3b8dc34ea72b1 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -463,7 +463,7 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
 					  netmem_ref netmem,
 					  u32 dma_sync_size)
 {
-	if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
+	if (READ_ONCE(pool->dma_sync) && dma_dev_need_sync(pool->p.dev))
 		__page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
 }
--
2.48.1