Message-ID: <CAP01T74Axm22TTXSaphxZLF=mj7=PnN2SPB98UvWvGR4FW2U9Q@mail.gmail.com>
Date: Wed, 27 Sep 2023 10:42:57 +0200
From: Kumar Kartikeya Dwivedi <memxor@...il.com>
To: Hsin-Wei Hung <hsinweih@....edu>
Cc: Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau <kafai@...com>, Song Liu <songliubraving@...com>,
Yonghong Song <yhs@...com>, John Fastabend <john.fastabend@...il.com>, KP Singh <kpsingh@...nel.org>,
Network Development <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>
Subject: Re: Possible kernel memory leak in bpf_timer
On Wed, 27 Sept 2023 at 07:32, Hsin-Wei Hung <hsinweih@....edu> wrote:
>
> Hi,
>
> We found a potential memory leak in bpf_timer in v5.15.26 using a
> customized syzkaller for fuzzing the BPF runtime. It can happen when
> an array map is being released: an entry that has already been checked by
> bpf_timer_cancel_and_free() can afterwards be initialized again by
> bpf_timer_init(). Since both paths are almost identical between v5.15
> and net-next, I suspect the problem still exists there. Below are the
> kmemleak report and some additional printks I inserted.
>
> [ 1364.081694] array_map_free_timers map:0xffffc900005a9000
> [ 1364.081730] ____bpf_timer_init map:0xffffc900005a9000
> timer:0xffff888001ab4080
>
> *bpf_timer_cancel_and_free(), which would kfree the struct bpf_hrtimer
> at 0xffff888001ab4080, is never called again*
>
> [ 1383.907869] kmemleak: 1 new suspected memory leaks (see
> /sys/kernel/debug/kmemleak)
> BUG: memory leak
> unreferenced object 0xffff888001ab4080 (size 96):
> comm "sshd", pid 279, jiffies 4295233126 (age 29.952s)
> hex dump (first 32 bytes):
> 80 40 ab 01 80 88 ff ff 00 00 00 00 00 00 00 00 .@..............
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
> backtrace:
> [<000000009d018da0>] bpf_map_kmalloc_node+0x89/0x1a0
> [<00000000ebcb33fc>] bpf_timer_init+0x177/0x320
> [<00000000fb7e90bf>] 0xffffffffc02a0358
> [<000000000c89ec4f>] __cgroup_bpf_run_filter_skb+0xcbf/0x1110
> [<00000000fd663fc0>] ip_finish_output+0x13d/0x1f0
> [<00000000acb3205c>] ip_output+0x19b/0x310
> [<000000006b584375>] __ip_queue_xmit+0x182e/0x1ed0
> [<00000000b921b07e>] __tcp_transmit_skb+0x2b65/0x37f0
> [<0000000026104b23>] tcp_write_xmit+0xf19/0x6290
> [<000000006dc71bc5>] __tcp_push_pending_frames+0xaf/0x390
> [<00000000251b364a>] tcp_push+0x452/0x6d0
> [<000000008522b7d3>] tcp_sendmsg_locked+0x2567/0x3030
> [<0000000038c644d2>] tcp_sendmsg+0x30/0x50
> [<000000009fe3413f>] inet_sendmsg+0xba/0x140
> [<0000000034d78039>] sock_sendmsg+0x13d/0x190
> [<00000000f55b8db6>] sock_write_iter+0x296/0x3d0
>
>
Does this happen on bpf-next? Things have changed around timer freeing
since then.
Even just sharing the reproducer for this would work; I can take a look.
Thanks