Message-ID: <20230720173752.2038136-1-kuba@kernel.org>
Date: Thu, 20 Jul 2023 10:37:51 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: davem@...emloft.net
Cc: netdev@...r.kernel.org,
edumazet@...gle.com,
pabeni@...hat.com,
Jakub Kicinski <kuba@...nel.org>,
peterz@...radead.org,
mingo@...hat.com,
will@...nel.org,
longman@...hat.com,
boqun.feng@...il.com,
hawk@...nel.org,
ilias.apalodimas@...aro.org
Subject: [PATCH net-next] page_pool: add a lockdep check for recycling in hardirq
Page pool use in hardirq is prohibited; add debug checks
to catch misuse. IIRC we previously discussed using
DEBUG_NET_WARN_ON_ONCE() for this, but there were concerns
that people will have DEBUG_NET enabled in perf testing.
I don't think anyone enables lockdep in perf testing,
so use lockdep to avoid pushback and arguing :)
Signed-off-by: Jakub Kicinski <kuba@...nel.org>
---
CC: peterz@...radead.org
CC: mingo@...hat.com
CC: will@...nel.org
CC: longman@...hat.com
CC: boqun.feng@...il.com
CC: hawk@...nel.org
CC: ilias.apalodimas@...aro.org
---
include/linux/lockdep.h | 7 +++++++
net/core/page_pool.c | 4 ++++
2 files changed, 11 insertions(+)
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 310f85903c91..dc2844b071c2 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -625,6 +625,12 @@ do { \
WARN_ON_ONCE(__lockdep_enabled && !this_cpu_read(hardirq_context)); \
} while (0)
+#define lockdep_assert_no_hardirq() \
+do { \
+ WARN_ON_ONCE(__lockdep_enabled && (this_cpu_read(hardirq_context) || \
+ !this_cpu_read(hardirqs_enabled))); \
+} while (0)
+
#define lockdep_assert_preemption_enabled() \
do { \
WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT) && \
@@ -659,6 +665,7 @@ do { \
# define lockdep_assert_irqs_enabled() do { } while (0)
# define lockdep_assert_irqs_disabled() do { } while (0)
# define lockdep_assert_in_irq() do { } while (0)
+# define lockdep_assert_no_hardirq() do { } while (0)
# define lockdep_assert_preemption_enabled() do { } while (0)
# define lockdep_assert_preemption_disabled() do { } while (0)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index a3e12a61d456..3ac760fcdc22 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -536,6 +536,8 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page)
static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
{
int ret;
+
+ lockdep_assert_no_hardirq();
/* BH protection not needed if current is softirq */
if (in_softirq())
ret = ptr_ring_produce(&pool->ring, page);
@@ -642,6 +644,8 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
int i, bulk_len = 0;
bool in_softirq;
+ lockdep_assert_no_hardirq();
+
for (i = 0; i < count; i++) {
struct page *page = virt_to_head_page(data[i]);
--
2.41.0