Message-ID: <20260123082349.42663-2-alibuda@linux.alibaba.com>
Date: Fri, 23 Jan 2026 16:23:47 +0800
From: "D. Wythe" <alibuda@...ux.alibaba.com>
To: "David S. Miller" <davem@...emloft.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Dust Li <dust.li@...ux.alibaba.com>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Sidraya Jayagond <sidraya@...ux.ibm.com>,
Uladzislau Rezki <urezki@...il.com>,
Wenjia Zhang <wenjia@...ux.ibm.com>
Cc: Mahanta Jambigi <mjambigi@...ux.ibm.com>,
Simon Horman <horms@...nel.org>,
Tony Lu <tonylu@...ux.alibaba.com>,
Wen Gu <guwen@...ux.alibaba.com>,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
linux-rdma@...r.kernel.org,
linux-s390@...r.kernel.org,
netdev@...r.kernel.org,
oliver.yang@...ux.alibaba.com
Subject: [PATCH net-next 1/3] net/smc: cap allocation order for SMC-R physically contiguous buffers
alloc_pages() cannot satisfy requests whose order exceeds MAX_PAGE_ORDER;
attempting such allocations is guaranteed to fail and may trigger kernel
warnings.
For SMCR_PHYS_CONT_BUFS, cap the allocation order at MAX_PAGE_ORDER, so
that we attempt to allocate the largest possible physically contiguous
chunk instead of failing with an invalid order. This also avoids
redundant "try-fail-degrade" cycles in __smc_buf_create().
For SMCR_MIXED_BUFS, if the order exceeds MAX_PAGE_ORDER, skip the
doomed physical allocation attempt and fall back to virtual memory
immediately.
Signed-off-by: D. Wythe <alibuda@...ux.alibaba.com>
Reviewed-by: Dust Li <dust.li@...ux.alibaba.com>
---
net/smc/smc_core.c | 28 ++++++++++++++++------------
1 file changed, 16 insertions(+), 12 deletions(-)
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index e4eabc83719e..6219db498976 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -2324,26 +2324,30 @@ static struct smc_buf_desc *smcr_new_buf_create(struct smc_link_group *lgr,
if (!buf_desc)
return ERR_PTR(-ENOMEM);
+ buf_desc->order = get_order(bufsize);
+
switch (lgr->buf_type) {
case SMCR_PHYS_CONT_BUFS:
+ buf_desc->order = min(buf_desc->order, MAX_PAGE_ORDER);
+ fallthrough;
case SMCR_MIXED_BUFS:
- buf_desc->order = get_order(bufsize);
- buf_desc->pages = alloc_pages(GFP_KERNEL | __GFP_NOWARN |
- __GFP_NOMEMALLOC | __GFP_COMP |
- __GFP_NORETRY | __GFP_ZERO,
- buf_desc->order);
- if (buf_desc->pages) {
- buf_desc->cpu_addr =
- (void *)page_address(buf_desc->pages);
- buf_desc->len = bufsize;
- buf_desc->is_vm = false;
- break;
+ if (buf_desc->order <= MAX_PAGE_ORDER) {
+ buf_desc->pages = alloc_pages(GFP_KERNEL | __GFP_NOWARN |
+ __GFP_NOMEMALLOC | __GFP_COMP |
+ __GFP_NORETRY | __GFP_ZERO,
+ buf_desc->order);
+ if (buf_desc->pages) {
+ buf_desc->cpu_addr =
+ (void *)page_address(buf_desc->pages);
+ buf_desc->len = bufsize;
+ buf_desc->is_vm = false;
+ break;
+ }
}
if (lgr->buf_type == SMCR_PHYS_CONT_BUFS)
goto out;
fallthrough; // try virtually contiguous buf
case SMCR_VIRT_CONT_BUFS:
- buf_desc->order = get_order(bufsize);
buf_desc->cpu_addr = vzalloc(PAGE_SIZE << buf_desc->order);
if (!buf_desc->cpu_addr)
goto out;
--
2.45.0