Message-ID: <rozn3jx2kbtlpcfvymykyqp2wapqw3jp4wkv6ehrzfqynokr7z@eij4fqog2ldu>
Date: Mon, 13 Oct 2025 04:05:36 -0700
From: Breno Leitao <leitao@...ian.org>
To: "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>, Simon Horman <horms@...nel.org>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
kernel-team@...a.com, gustavold@...il.com
Subject: Re: [PATCH net] netpoll: Fix deadlock caused by memory allocation
under spinlock
On Mon, Oct 13, 2025 at 02:42:29AM -0700, Breno Leitao wrote:
> Fix an AA deadlock in refill_skbs(), where allocating memory while
> holding skb_pool->lock can trigger a recursive attempt to acquire the
> same lock.
>
> The deadlock scenario occurs when the system is under severe memory
> pressure:
>
> 1. refill_skbs() acquires skb_pool->lock (spinlock)
> 2. alloc_skb() is called while holding the lock
> 3. Memory allocator fails and calls slab_out_of_memory()
> 4. This triggers printk() for the OOM warning
> 5. The console output path calls netpoll_send_udp()
> 6. netpoll_send_udp() attempts to acquire the same skb_pool->lock
> 7. Deadlock: the lock is already held by the same CPU
>
> Call stack:
>   refill_skbs()
>     spin_lock_irqsave(&skb_pool->lock)   <- lock acquired
>     __alloc_skb()
>       kmem_cache_alloc_node_noprof()
>         slab_out_of_memory()
>           printk()
>             console_flush_all()
>               netpoll_send_udp()
>                 skb_dequeue()
>                   spin_lock_irqsave()    <- deadlock attempt
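
For anyone reading along: the pre-patch function looks roughly like
this (reconstructed from the hunk's context and removed lines below,
so a sketch rather than verbatim source). The problem is the
allocation sitting inside the locked region:

	static void refill_skbs(struct netpoll *np)
	{
		struct sk_buff_head *skb_pool = &np->skb_pool;
		struct sk_buff *skb;
		unsigned long flags;

		spin_lock_irqsave(&skb_pool->lock, flags);
		while (skb_pool->qlen < MAX_SKBS) {
			/* GFP_ATOMIC allocation under skb_pool->lock: an
			 * OOM warning printed from here can recurse into
			 * netpoll_send_udp() and try to take this lock
			 * again on the same CPU.
			 */
			skb = alloc_skb(MAX_SKB_SIZE, GFP_ATOMIC);
			if (!skb)
				break;

			__skb_queue_tail(skb_pool, skb);
		}
		spin_unlock_irqrestore(&skb_pool->lock, flags);
	}
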
>
> Refactor refill_skbs() to never allocate memory while holding
> the spinlock.
>
> Signed-off-by: Breno Leitao <leitao@...ian.org>
> Fixes: 1da177e4c3f41 ("Linux-2.6.12-rc2")
> ---
> net/core/netpoll.c | 18 +++++++++++++++---
> 1 file changed, 15 insertions(+), 3 deletions(-)
>
> diff --git a/net/core/netpoll.c b/net/core/netpoll.c
> index 60a05d3b7c249..788cec4d527f8 100644
> --- a/net/core/netpoll.c
> +++ b/net/core/netpoll.c
> @@ -232,14 +232,26 @@ static void refill_skbs(struct netpoll *np)
>
>  	skb_pool = &np->skb_pool;
>
> -	spin_lock_irqsave(&skb_pool->lock, flags);
> -	while (skb_pool->qlen < MAX_SKBS) {
> +	while (1) {
> +		spin_lock_irqsave(&skb_pool->lock, flags);
> +		if (skb_pool->qlen >= MAX_SKBS)
> +			goto unlock;
> +		spin_unlock_irqrestore(&skb_pool->lock, flags);
> +
>  		skb = alloc_skb(MAX_SKB_SIZE, GFP_ATOMIC);
>  		if (!skb)
> -			break;
> +			return;
>
> +		spin_lock_irqsave(&skb_pool->lock, flags);
> +		if (skb_pool->qlen >= MAX_SKBS)
> +			/* Discard if len got increased (TOCTOU) */
> +			goto discard;
>  		__skb_queue_tail(skb_pool, skb);
> +		spin_unlock_irqrestore(&skb_pool->lock, flags);
>  	}
We probably want to return here, as Rik van Riel pointed out offline.
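
Something like the following for the tail of the function (a
hypothetical sketch only; the unlock/discard labels are trimmed from
the quote above, so the v1 code may differ in detail):

		}
		/* hypothetical sketch of the v2 tail, with the return added */
		return;

	discard:
		kfree_skb(skb);
	unlock:
		spin_unlock_irqrestore(&skb_pool->lock, flags);
	}
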
If there are no further concerns, I will wait out the 24-hour period
and send a v2.