Message-ID: <4abeeded-536e-be28-5409-8ad502674217@intel.com>
Date: Fri, 21 Jul 2023 17:48:25 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: <davem@...emloft.net>, <netdev@...r.kernel.org>, <edumazet@...gle.com>,
<pabeni@...hat.com>, <peterz@...radead.org>, <mingo@...hat.com>,
<will@...nel.org>, <longman@...hat.com>, <boqun.feng@...il.com>,
<hawk@...nel.org>, <ilias.apalodimas@...aro.org>
Subject: Re: [PATCH net-next] page_pool: add a lockdep check for recycling in
hardirq
From: Jakub Kicinski <kuba@...nel.org>
Date: Thu, 20 Jul 2023 10:37:51 -0700
> Page pool use in hardirq is prohibited; add debug checks
> to catch misuses. IIRC we previously discussed using
> DEBUG_NET_WARN_ON_ONCE() for this, but there were concerns
> that people will have DEBUG_NET enabled in perf testing.
> I don't think anyone enables lockdep in perf testing,
> so use lockdep to avoid pushback and arguing :)
+1 patch to add to my tree to base my current series on...
Time to create separate repo named "page-pool-next"? :D
>
> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> ---
> CC: peterz@...radead.org
> CC: mingo@...hat.com
> CC: will@...nel.org
> CC: longman@...hat.com
> CC: boqun.feng@...il.com
> CC: hawk@...nel.org
> CC: ilias.apalodimas@...aro.org
> ---
> include/linux/lockdep.h | 7 +++++++
> net/core/page_pool.c | 4 ++++
> 2 files changed, 11 insertions(+)
>
> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index 310f85903c91..dc2844b071c2 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -625,6 +625,12 @@ do { \
> WARN_ON_ONCE(__lockdep_enabled && !this_cpu_read(hardirq_context)); \
> } while (0)
>
> +#define lockdep_assert_no_hardirq() \
> +do { \
> + WARN_ON_ONCE(__lockdep_enabled && (this_cpu_read(hardirq_context) || \
> + !this_cpu_read(hardirqs_enabled))); \
> +} while (0)
> +
> #define lockdep_assert_preemption_enabled() \
> do { \
> WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT) && \
> @@ -659,6 +665,7 @@ do { \
> # define lockdep_assert_irqs_enabled() do { } while (0)
> # define lockdep_assert_irqs_disabled() do { } while (0)
> # define lockdep_assert_in_irq() do { } while (0)
> +# define lockdep_assert_no_hardirq() do { } while (0)
>
> # define lockdep_assert_preemption_enabled() do { } while (0)
> # define lockdep_assert_preemption_disabled() do { } while (0)
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index a3e12a61d456..3ac760fcdc22 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -536,6 +536,8 @@ static void page_pool_return_page(struct page_pool *pool, struct page *page)
> static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
Crap can happen earlier than this point. Imagine some weird code asking for
direct recycling with IRQs disabled. Then we can hit
__page_pool_put_page:page_pool_recycle_in_cache and who knows what happens
next.
Can't we add this assertion right at the beginning of
__page_pool_put_page()? That seems reasonable enough, at least to me, and
wouldn't require any commentary splats. Unlike put_defragged_page(), as
Yunsheng proposes :p
Other than that (which is debatable), this looks fine to me.
> {
> int ret;
> +
> + lockdep_assert_no_hardirq();
> /* BH protection not needed if current is softirq */
> if (in_softirq())
> ret = ptr_ring_produce(&pool->ring, page);
> @@ -642,6 +644,8 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
> int i, bulk_len = 0;
> bool in_softirq;
>
> + lockdep_assert_no_hardirq();
> +
> for (i = 0; i < count; i++) {
> struct page *page = virt_to_head_page(data[i]);
>
Thanks,
Olek