Message-ID: <CAMnf+Pg4BLVKAGsr9iuF1uH-GMOiyb8OW0nKQSEKmjJvXj+t1g@mail.gmail.com>
Date: Wed, 30 Oct 2019 14:30:27 -0500
From: JD <jdtxs00@...il.com>
To: netdev@...r.kernel.org
Cc: steffen.klassert@...unet.com
Subject: Followup: Kernel memory leak on 4.11+ & 5.3.x with IPsec
Hello, this is a followup to my previous email regarding a kernel
memory leak with IPsec.
After a lot of testing and narrowing down, I've determined that the
leak was introduced in the 4.11 kernel release. It still occurs in the
latest mainline kernel.
For brief context: there is a kernel memory leak in IPsec where
passing traffic through a tunnel eats away at available memory until
the OOM killer kicks in. This memory usage does not show up in slab or
in userspace, nor is it reclaimed by bringing down the tunnels or
unloading the respective kernel modules. The only way to get the
memory back is to reboot.
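If you want to watch the leak yourself, a minimal sketch that polls
/proc/meminfo is enough to see the drift (the MemAvailable field is
standard; the 60-second interval is an arbitrary choice):

  #!/usr/bin/env python3
  # Sample MemAvailable from /proc/meminfo and print the drift over time.
  import time

  def mem_available_kb():
      with open("/proc/meminfo") as f:
          for line in f:
              if line.startswith("MemAvailable:"):
                  return int(line.split()[1])  # reported in kB
      raise RuntimeError("MemAvailable not found in /proc/meminfo")

  INTERVAL = 60  # seconds between samples; arbitrary
  start = mem_available_kb()
  while True:
      time.sleep(INTERVAL)
      now = mem_available_kb()
      print(f"MemAvailable: {now} kB ({now - start:+d} kB since start)")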
To keep things simple, here are some facts about the issue:
- It is definitely related to IPsec/xfrm in some way. The boxes I
tested on are fresh installs with no other software or customization
whatsoever; they are used only for IPsec tunnels.
- Memory can leak at a rate of ~150MB per day.
- The issue begins as of kernel 4.11. Kernel 4.10 does not have this leak.
- The problem only reproduces when traffic is passed through the IPsec
tunnels. Keeping the tunnels idle does not eat away at memory.
- The issue affects the current mainline kernel.
- Ubuntu 19.10, CentOS 7, and RHEL 8 have been tested; all exhibit the behavior.
- The issue happens on both bare metal and virtual machines.
- kmemleak does not produce any results; memleak-bpfcc does.
(Presumably kmemleak misses it because the leak is in whole page
allocations rather than slab objects, which matches the stack trace
below.)
I have attached the output of meminfo, slabinfo, and the results from
"memleak-bpfcc 3 -o 600000". These are from a system running the 5.3.0
kernel on Ubuntu 19.10. I have also attached smem output with dates,
which shows kernel memory growing by 2x.
This particular entry from memleak-bpfcc is interesting: the pages are
allocated by skb_page_frag_refill() from esp_output_tail() on the ESP
output path, and apparently never freed:
65536 bytes in 2 allocations from stack
__alloc_pages_nodemask+0x239 [kernel]
__alloc_pages_nodemask+0x239 [kernel]
alloc_pages_current+0x87 [kernel]
skb_page_frag_refill+0x80 [kernel]
esp_output_tail+0x3a5 [kernel]
esp_output+0x11f [kernel]
xfrm_output_resume+0x480 [kernel]
xfrm_output+0x81 [kernel]
xfrm4_output_finish+0x2b [kernel]
__xfrm4_output+0x44 [kernel]
xfrm4_output+0x3f [kernel]
ip_forward_finish+0x58 [kernel]
ip_forward+0x3f9 [kernel]
ip_rcv_finish+0x85 [kernel]
ip_rcv+0xbc [kernel]
__netif_receive_skb_one_core+0x87 [kernel]
__netif_receive_skb+0x18 [kernel]
netif_receive_skb_internal+0x45 [kernel]
napi_gro_receive+0xff [kernel]
receive_buf+0x175 [kernel]
virtnet_poll+0x158 [kernel]
net_rx_action+0x13a [kernel]
__softirqentry_text_start+0xe1 [kernel]
run_ksoftirqd+0x2b [kernel]
smpboot_thread_fn+0xd0 [kernel]
kthread+0x104 [kernel]
ret_from_fork+0x35 [kernel]
Here are some clear steps to reproduce:
- On your preferred OS, install an IPsec daemon/software
(strongswan/openswan/whatever)
- Set up an IKEv2 connection in tunnel mode. Use an RFC1918 private
range for your client IP pool, e.g. 10.2.0.0/16.
- Enable IP forwarding (net.ipv4.ip_forward = 1)
- MASQUERADE the 10.2.0.0/16 range using iptables, e.g: "-A
POSTROUTING -s 10.2.0.0/16 -o eth0 -j MASQUERADE"
- Connect some IKEv2 clients (any device, any platform, it doesn't
matter) and pass traffic through the tunnel. Having multiple tunnels
passing traffic at the same time speeds up the leak. (A throwaway
traffic generator is sketched after this list.)
- Observe that memory is lost over time and never recovered. It
doesn't matter if you restart the daemon, bring down the tunnels, or
even unload the xfrm/ipsec modules; the memory goes into the void. The
only way to reclaim it is a full reboot.
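If you don't have a traffic generator handy, a throwaway UDP sender
like the minimal sketch below is enough to keep a tunnel busy (the
destination address and port are placeholders for any host reachable
through the tunnel):

  #!/usr/bin/env python3
  # Throwaway UDP sender to keep an IPsec tunnel busy.
  import socket

  DEST = ("10.2.0.1", 9999)  # placeholder: any host reachable via the tunnel
  PAYLOAD = b"x" * 1400      # sized to stay under a typical tunnel MTU

  s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  while True:
      s.sendto(PAYLOAD, DEST)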
Please let me know if anything further is needed to diagnose/debug
this problem. We're stuck with the 4.9 kernel because all newer
kernels leak memory. Any help or advice is appreciated.
Thank you.
View attachment "meminfo.txt" of type "text/plain" (1419 bytes)
View attachment "slabinfo.txt" of type "text/plain" (17826 bytes)
Download attachment "memleak-bpfcc.log" of type "application/octet-stream" (1838359 bytes)
View attachment "smem.txt" of type "text/plain" (367 bytes)