Message-ID: <20250908152123.97829-1-kuba@kernel.org>
Date: Mon, 8 Sep 2025 08:21:23 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: davem@...emloft.net
Cc: netdev@...r.kernel.org,
edumazet@...gle.com,
pabeni@...hat.com,
andrew+netdev@...n.ch,
horms@...nel.org,
Jakub Kicinski <kuba@...nel.org>,
hawk@...nel.org,
ilias.apalodimas@...aro.org,
nathan@...nel.org,
nick.desaulniers+lkml@...il.com,
morbo@...gle.com,
justinstitt@...gle.com,
llvm@...ts.linux.dev
Subject: [PATCH net-next] page_pool: always add GFP_NOWARN for ATOMIC allocations

Driver authors often forget to add __GFP_NOWARN for page allocations
on the datapath. This is annoying to operators, as OOMs are a fact of
life and we pretty much expect network Rx to hit page allocation
failures during OOM. Make page pool add __GFP_NOWARN for ATOMIC
allocations by default.
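
To illustrate the intended effect (a sketch, not part of the patch;
it assumes a pool that was set up earlier with page_pool_create()):

    /* Rx refill path: with this change, plain GFP_ATOMIC behaves
     * as if GFP_ATOMIC | __GFP_NOWARN had been passed.
     */
    struct page *page = page_pool_alloc_pages(pool, GFP_ATOMIC);

    if (!page)
            return -ENOMEM; /* expected under OOM, no splat in the log */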

Don't compare to GFP_ATOMIC because it's a mask with 2 bits set.
We want a single bit so that the compiler can do an unconditional
mask and shift. clang builds the condition as:

    1c31: 89 e8        movl %ebp, %eax
    1c33: 83 e0 20     andl $0x20, %eax
    1c36: c1 e0 0d     shll $0xd, %eax
    1c39: 09 e8        orl  %ebp, %eax

so there seems to be no need anymore for the old flag multiplication
tricks, which are less readable. Pick the lowest bit out of GFP_ATOMIC
to limit the size of the instructions.
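
In C terms, the two candidate conditions look like this (sketch only,
not part of the patch):

    /* Comparing to the full mask requires checking both bits: */
    if ((gfp & GFP_ATOMIC) == GFP_ATOMIC)
            gfp |= __GFP_NOWARN;

    /* Testing a single bit lets the compiler rewrite the whole
     * statement as gfp |= (gfp & __GFP_HIGH) << N, with N the
     * constant distance between the two bits -- no branch needed:
     */
    if (gfp & __GFP_HIGH)
            gfp |= __GFP_NOWARN;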

The specific change that makes me propose this is that bnxt, after
commit cd1fafe7da1f ("eth: bnxt: add support rx side device memory TCP"),
lost the __GFP_NOWARN, again. It used to allocate with
page_pool_dev_alloc_*(), which added the NOWARN unconditionally. When
switching to __bnxt_alloc_rx_netmem(), the authors forgot to add NOWARN
to the explicitly specified flags.
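
For reference, the page_pool_dev_alloc_pages() wrapper in
include/net/page_pool/helpers.h is roughly:

    static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
    {
            gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);

            return page_pool_alloc_pages(pool, gfp);
    }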

Signed-off-by: Jakub Kicinski <kuba@...nel.org>
---
CC: hawk@...nel.org
CC: ilias.apalodimas@...aro.org
CC: nathan@...nel.org
CC: nick.desaulniers+lkml@...il.com
CC: morbo@...gle.com
CC: justinstitt@...gle.com
CC: llvm@...ts.linux.dev
---
 net/core/page_pool.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index ba70569bd4b0..6ffce0e821e4 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -555,6 +555,13 @@ static noinline netmem_ref __page_pool_alloc_netmems_slow(struct page_pool *pool
 	netmem_ref netmem;
 	int i, nr_pages;
 
+	/* Unconditionally set NOWARN if allocating from the datapath.
+	 * Use a single bit from the ATOMIC mask to help compiler optimize.
+	 */
+	BUILD_BUG_ON(!(GFP_ATOMIC & __GFP_HIGH));
+	if (gfp & __GFP_HIGH)
+		gfp |= __GFP_NOWARN;
+
 	/* Don't support bulk alloc for high-order pages */
 	if (unlikely(pp_order))
 		return page_to_netmem(__page_pool_alloc_page_order(pool, gfp));
--
2.51.0