Message-ID: <20240304094950.761233-1-dtatulea@nvidia.com>
Date: Mon, 4 Mar 2024 11:48:52 +0200
From: Dragos Tatulea <dtatulea@...dia.com>
To: Steffen Klassert <steffen.klassert@...unet.com>, Herbert Xu
<herbert@...dor.apana.org.au>, "David S. Miller" <davem@...emloft.net>,
"David Ahern" <dsahern@...nel.org>, Eric Dumazet <edumazet@...gle.com>,
"Jakub Kicinski" <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>
CC: <leonro@...dia.com>, <gal@...dia.com>, Dragos Tatulea
<dtatulea@...dia.com>, "Anatoli N . Chechelnickiy"
<Anatoli.Chechelnickiy@...nterpipe.biz>, Ian Kumlien <ian.kumlien@...il.com>,
<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: [RFC] net: esp: fix bad handling of pages from page_pool
When the skb is reorganized during esp_output (!esp->inplace), the pages
coming from the original skb fragments are supposed to be released back
to the system through put_page. But if the skb fragment pages originate
from a page_pool, calling put_page on them will trigger a page_pool leak
which will eventually result in a crash.
This leak can easily be observed when using CONFIG_DEBUG_VM and doing
IPsec + GRE (non-offloaded) forwarding:
BUG: Bad page state in process ksoftirqd/16 pfn:1451b6
page:00000000de2b8d32 refcount:0 mapcount:0 mapping:0000000000000000 index:0x1451b6000 pfn:0x1451b6
flags: 0x200000000000000(node=0|zone=2)
page_type: 0xffffffff()
raw: 0200000000000000 dead000000000040 ffff88810d23c000 0000000000000000
raw: 00000001451b6000 0000000000000001 00000000ffffffff 0000000000000000
page dumped because: page_pool leak
Modules linked in: ip_gre gre mlx5_ib mlx5_core xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink iptable_nat nf_nat xt_addrtype br_netfilter rpcrdma rdma_ucm ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm ib_uverbs ib_core overlay zram zsmalloc fuse [last unloaded: mlx5_core]
CPU: 16 PID: 96 Comm: ksoftirqd/16 Not tainted 6.8.0-rc4+ #22
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x36/0x50
bad_page+0x70/0xf0
free_unref_page_prepare+0x27a/0x460
free_unref_page+0x38/0x120
esp_ssg_unref.isra.0+0x15f/0x200
esp_output_tail+0x66d/0x780
esp_xmit+0x2c5/0x360
validate_xmit_xfrm+0x313/0x370
? validate_xmit_skb+0x1d/0x330
validate_xmit_skb_list+0x4c/0x70
sch_direct_xmit+0x23e/0x350
__dev_queue_xmit+0x337/0xba0
? nf_hook_slow+0x3f/0xd0
ip_finish_output2+0x25e/0x580
iptunnel_xmit+0x19b/0x240
ip_tunnel_xmit+0x5fb/0xb60
ipgre_xmit+0x14d/0x280 [ip_gre]
dev_hard_start_xmit+0xc3/0x1c0
__dev_queue_xmit+0x208/0xba0
? nf_hook_slow+0x3f/0xd0
ip_finish_output2+0x1ca/0x580
ip_sublist_rcv_finish+0x32/0x40
ip_sublist_rcv+0x1b2/0x1f0
? ip_rcv_finish_core.constprop.0+0x460/0x460
ip_list_rcv+0x103/0x130
__netif_receive_skb_list_core+0x181/0x1e0
netif_receive_skb_list_internal+0x1b3/0x2c0
napi_gro_receive+0xc8/0x200
gro_cell_poll+0x52/0x90
__napi_poll+0x25/0x1a0
net_rx_action+0x28e/0x300
__do_softirq+0xc3/0x276
? sort_range+0x20/0x20
run_ksoftirqd+0x1e/0x30
smpboot_thread_fn+0xa6/0x130
kthread+0xcd/0x100
? kthread_complete_and_exit+0x20/0x20
ret_from_fork+0x31/0x50
? kthread_complete_and_exit+0x20/0x20
ret_from_fork_asm+0x11/0x20
</TASK>
The suggested fix is to use the page_pool release API first and then
fall back to put_page.
Signed-off-by: Dragos Tatulea <dtatulea@...dia.com>
Reported-by: Anatoli N. Chechelnickiy <Anatoli.Chechelnickiy@...nterpipe.biz>
Reported-by: Ian Kumlien <ian.kumlien@...il.com>
---
net/ipv4/esp4.c | 11 ++++++++---
net/ipv6/esp6.c | 11 ++++++++---
2 files changed, 16 insertions(+), 6 deletions(-)
diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c
index 4dd9e5040672..3e07d78c887d 100644
--- a/net/ipv4/esp4.c
+++ b/net/ipv4/esp4.c
@@ -112,9 +112,14 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp)
/* Unref skb_frag_pages in the src scatterlist if necessary.
* Skip the first sg which comes from skb->data.
*/
- if (req->src != req->dst)
- for (sg = sg_next(req->src); sg; sg = sg_next(sg))
- put_page(sg_page(sg));
+ if (req->src != req->dst) {
+ for (sg = sg_next(req->src); sg; sg = sg_next(sg)) {
+ struct page *page = sg_page(sg);
+
+ if (!napi_pp_put_page(page, false))
+ put_page(page);
+ }
+ }
}
#ifdef CONFIG_INET_ESPINTCP
diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c
index 6e6efe026cdc..b73f5773139d 100644
--- a/net/ipv6/esp6.c
+++ b/net/ipv6/esp6.c
@@ -129,9 +129,14 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp)
/* Unref skb_frag_pages in the src scatterlist if necessary.
* Skip the first sg which comes from skb->data.
*/
- if (req->src != req->dst)
- for (sg = sg_next(req->src); sg; sg = sg_next(sg))
- put_page(sg_page(sg));
+ if (req->src != req->dst) {
+ for (sg = sg_next(req->src); sg; sg = sg_next(sg)) {
+ struct page *page = sg_page(sg);
+
+ if (!napi_pp_put_page(page, false))
+ put_page(page);
+ }
+ }
}
#ifdef CONFIG_INET6_ESPINTCP
--
2.42.0