Message-ID: <20230928025236.3051774-1-rkannoth@marvell.com>
Date: Thu, 28 Sep 2023 08:22:36 +0530
From: Ratheesh Kannoth <rkannoth@...vell.com>
To: <netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>
CC: <sgoutham@...vell.com>, <gakula@...vell.com>,
<sbhatta@...vell.com>, <hkelam@...vell.com>, <davem@...emloft.net>,
<edumazet@...gle.com>, <kuba@...nel.org>, <pabeni@...hat.com>,
<rkannoth@...vell.com>, <hawk@...nel.org>,
<alexander.duyck@...il.com>, <ilias.apalodimas@...aro.org>,
<linyunsheng@...wei.com>, <bigeasy@...utronix.de>
Subject: [PATCH net] octeontx2-pf: Fix page pool frag allocation failure.

Since the page pool parameter "order" is left at 0, bringing the
interface up after configuring an rx buffer length larger than one
page triggers the warning below.

Steps to reproduce the issue:
1. devlink dev param set pci/0002:04:00.0 name receive_buffer_size \
   value 8196 cmode runtime
2. ifconfig eth0 up
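
For reference, page_pool_alloc_frag() refuses (and warns about) any
fragment larger than PAGE_SIZE << pool->p.order, which is what the
WARN at net/core/page_pool.c:567 below reports. A minimal sketch of
that constraint follows; frag_fits() is a hypothetical helper and the
PAGE_SIZE value is an assumption (4K pages), not the exact kernel
check:

/* Simplified illustration of the frag-size limit enforced by
 * page_pool_alloc_frag(); not the exact kernel code.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* assumption: 4K pages on this arm64 system */

/* A frag request only fits if it is no larger than one pool page. */
static int frag_fits(unsigned long size, unsigned int order)
{
	return size <= (PAGE_SIZE << order);
}

int main(void)
{
	/* receive_buffer_size=8196 from the repro step above */
	printf("order 0: %s\n", frag_fits(8196, 0) ? "fits" : "WARN + alloc failure");
	printf("order 2: %s\n", frag_fits(8196, 2) ? "fits" : "WARN + alloc failure");
	return 0;
}
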
[ 19.901356] ------------[ cut here ]------------
[ 19.901361] WARNING: CPU: 11 PID: 12331 at net/core/page_pool.c:567 page_pool_alloc_frag+0x3c/0x230
[ 19.901449] pstate: 82401009 (Nzcv daif +PAN -UAO +TCO -DIT +SSBS BTYPE=--)
[ 19.901451] pc : page_pool_alloc_frag+0x3c/0x230
[ 19.901453] lr : __otx2_alloc_rbuf+0x60/0xbc [rvu_nicpf]
[ 19.901460] sp : ffff80000f66b970
[ 19.901461] x29: ffff80000f66b970 x28: 0000000000000000 x27: 0000000000000000
[ 19.901464] x26: ffff800000d15b68 x25: ffff000195b5c080 x24: ffff0002a5a32dc0
[ 19.901467] x23: ffff0001063c0878 x22: 0000000000000100 x21: 0000000000000000
[ 19.901469] x20: 0000000000000000 x19: ffff00016f781000 x18: 0000000000000000
[ 19.901472] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
[ 19.901474] x14: 0000000000000000 x13: ffff0005ffdc9c80 x12: 0000000000000000
[ 19.901477] x11: ffff800009119a38 x10: 4c6ef2e3ba300519 x9 : ffff800000d13844
[ 19.901479] x8 : ffff0002a5a33cc8 x7 : 0000000000000030 x6 : 0000000000000030
[ 19.901482] x5 : 0000000000000005 x4 : 0000000000000000 x3 : 0000000000000a20
[ 19.901484] x2 : 0000000000001080 x1 : ffff80000f66b9d4 x0 : 0000000000001000
[ 19.901487] Call trace:
[ 19.901488] page_pool_alloc_frag+0x3c/0x230
[ 19.901490] __otx2_alloc_rbuf+0x60/0xbc [rvu_nicpf]
[ 19.901494] otx2_rq_aura_pool_init+0x1c4/0x240 [rvu_nicpf]
[ 19.901498] otx2_open+0x228/0xa70 [rvu_nicpf]
[ 19.901501] otx2vf_open+0x20/0xd0 [rvu_nicvf]
[ 19.901504] __dev_open+0x114/0x1d0
[ 19.901507] __dev_change_flags+0x194/0x210
[ 19.901510] dev_change_flags+0x2c/0x70
[ 19.901512] devinet_ioctl+0x3a4/0x6c4
[ 19.901515] inet_ioctl+0x228/0x240
[ 19.901518] sock_ioctl+0x2ac/0x480
[ 19.901522] __arm64_sys_ioctl+0x564/0xe50
[ 19.901525] invoke_syscall.constprop.0+0x58/0xf0
[ 19.901529] do_el0_svc+0x58/0x150
[ 19.901531] el0_svc+0x30/0x140
[ 19.901533] el0t_64_sync_handler+0xe8/0x114
[ 19.901535] el0t_64_sync+0x1a0/0x1a4
[ 19.901537] ---[ end trace 678c0bf660ad8116 ]---
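
Fix this by deriving the page pool order from the configured buffer
length instead of leaving it at 0. Below is a standalone sketch of the
arithmetic the patch adds; the PAGE_SIZE, SMP_CACHE_BYTES and
OTX2_ALIGN values are assumptions used only for illustration, the
driver itself uses the kernel definitions:

/* Standalone sketch of the order calculation introduced below.
 * The macro values are assumptions for illustration; the kernel's
 * own definitions are what the driver actually uses.
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define SMP_CACHE_BYTES	64UL	/* assumed L1 cache line size */
#define OTX2_ALIGN	128UL	/* assumed hardware buffer alignment */

#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))
#define SKB_DATA_ALIGN(x)	ALIGN(x, SMP_CACHE_BYTES)

int main(void)
{
	unsigned long buf_size = 8196;	/* receive_buffer_size from the repro */
	unsigned long sz = ALIGN(ALIGN(SKB_DATA_ALIGN(buf_size), OTX2_ALIGN), PAGE_SIZE);
	unsigned int order = (sz / PAGE_SIZE) - 1;

	/* 8196 -> sz = 12288 (3 pages) -> order 2 -> 16K per pool page */
	printf("buf_size=%lu sz=%lu order=%u max frag=%lu\n",
	       buf_size, sz, order, PAGE_SIZE << order);
	return 0;
}
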
Fixes: b2e3406a38f0 ("octeontx2-pf: Add support for page pool")
Signed-off-by: Ratheesh Kannoth <rkannoth@...vell.com>
---
 drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 997fedac3a98..78a4547e7001 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -1357,7 +1357,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
 	struct page_pool_params pp_params = { 0 };
 	struct npa_aq_enq_req *aq;
 	struct otx2_pool *pool;
-	int err;
+	int err, sz;
 
 	pool = &pfvf->qset.pool[pool_id];
 	/* Alloc memory for stack which is used to store buffer pointers */
@@ -1403,6 +1403,8 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
 		return 0;
 	}
 
+	sz = ALIGN(ALIGN(SKB_DATA_ALIGN(buf_size), OTX2_ALIGN), PAGE_SIZE);
+	pp_params.order = (sz / PAGE_SIZE) - 1;
 	pp_params.flags = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP;
 	pp_params.pool_size = min(OTX2_PAGE_POOL_SZ, numptrs);
 	pp_params.nid = NUMA_NO_NODE;
--
2.25.1