Message-ID: <20170810155138.nuhzm47zrp3ovnzv@hirez.programming.kicks-ass.net>
Date: Thu, 10 Aug 2017 17:51:38 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: shuwang@...hat.com
Cc: mingo@...hat.com, linux-kernel@...r.kernel.org, chuhu@...hat.com,
liwang@...hat.com
Subject: Re: [PATCH 1/1] sched/topology: fix memleak in __sdt_alloc()
On Thu, Aug 10, 2017 at 03:52:16PM +0800, shuwang@...hat.com wrote:
> From: Shu Wang <shuwang@...hat.com>
>
> Found this issue with kmemleak: the sg and sgc allocated in __sdt_alloc()
> can be leaked, because each domain holds references to many groups while
> destroy_sched_domain() only drops the reference of the first group.
>
> Onlining and offlining a CPU can trigger this leak and eventually cause
> an OOM. Reproducer on my 6-CPU machine:
> while true
> do
>     echo 0 > /sys/devices/system/cpu/cpu5/online
>     echo 1 > /sys/devices/system/cpu/cpu5/online
> done
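
For readers following along, here is a minimal self-contained sketch of the
refcounting pattern described above. The struct and helpers are made up for
illustration only and are not the kernel's sched_group/sched_group_capacity
code: groups form a circular list, every group is refcounted, and a correct
teardown has to drop one reference on each group in the ring rather than on
the first group only.

#include <stdlib.h>

struct group {
	struct group *next;	/* groups form a circular singly-linked list */
	int ref;		/* stand-in for the atomic_t reference count */
};

/* Correct teardown: walk the whole ring and drop one reference per group. */
static void put_group_ring(struct group *first)
{
	struct group *sg = first, *tmp;

	if (!sg)
		return;
	do {
		tmp = sg->next;
		if (--sg->ref == 0)
			free(sg);
		sg = tmp;
	} while (sg != first);
}

/* The leak described above: only the first group's reference is dropped. */
static void put_first_group_only(struct group *first)
{
	if (first && --first->ref == 0)
		free(first);
}

int main(void)
{
	/* Build a ring of three groups holding one reference each. */
	struct group *a = calloc(1, sizeof(*a));
	struct group *b = calloc(1, sizeof(*b));
	struct group *c = calloc(1, sizeof(*c));

	if (!a || !b || !c)
		return 1;

	a->next = b; b->next = c; c->next = a;
	a->ref = b->ref = c->ref = 1;

	put_group_ring(a);	/* frees a, b and c */

	/*
	 * Calling put_first_group_only(a) instead would free only a;
	 * b and c would stay allocated, which is the kind of leak the
	 * kmemleak report below shows after repeated CPU hotplug.
	 */
	(void)put_first_group_only;
	return 0;
}
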
>
> unreferenced object 0xffff88007d772a80 (size 64):
> comm "cpuhp/5", pid 39, jiffies 4294719962 (age 35.251s)
> hex dump (first 32 bytes):
> c0 22 77 7d 00 88 ff ff 02 00 00 00 01 00 00 00 ."w}............
> 40 2a 77 7d 00 88 ff ff 00 00 00 00 00 00 00 00 @*w}............
> backtrace:
> [<ffffffff8176525a>] kmemleak_alloc+0x4a/0xa0
> [<ffffffff8121efe1>] __kmalloc_node+0xf1/0x280
> [<ffffffff810d94a8>] build_sched_domains+0x1e8/0xf20
> [<ffffffff810da674>] partition_sched_domains+0x304/0x360
> [<ffffffff81139557>] cpuset_update_active_cpus+0x17/0x40
> [<ffffffff810bdb2e>] sched_cpu_activate+0xae/0xc0
> [<ffffffff810900e0>] cpuhp_invoke_callback+0x90/0x400
> [<ffffffff81090597>] cpuhp_up_callbacks+0x37/0xb0
> [<ffffffff81090887>] cpuhp_thread_fun+0xd7/0xf0
> [<ffffffff810b37e0>] smpboot_thread_fn+0x110/0x160
> [<ffffffff810af5d9>] kthread+0x109/0x140
> [<ffffffff81770e45>] ret_from_fork+0x25/0x30
> [<ffffffffffffffff>] 0xffffffffffffffff
> unreferenced object 0xffff88007d772a40 (size 64):
> comm "cpuhp/5", pid 39, jiffies 4294719962 (age 35.251s)
> hex dump (first 32 bytes):
> 03 00 00 00 00 00 00 00 00 04 00 00 00 00 00 00 ................
> 00 04 00 00 00 00 00 00 4f 3c fc ff 00 00 00 00 ........O<......
> backtrace:
> [<ffffffff8176525a>] kmemleak_alloc+0x4a/0xa0
> [<ffffffff8121efe1>] __kmalloc_node+0xf1/0x280
> [<ffffffff810da16d>] build_sched_domains+0xead/0xf20
> [<ffffffff810da674>] partition_sched_domains+0x304/0x360
> [<ffffffff81139557>] cpuset_update_active_cpus+0x17/0x40
> [<ffffffff810bdb2e>] sched_cpu_activate+0xae/0xc0
> [<ffffffff810900e0>] cpuhp_invoke_callback+0x90/0x400
> [<ffffffff81090597>] cpuhp_up_callbacks+0x37/0xb0
> [<ffffffff81090887>] cpuhp_thread_fun+0xd7/0xf0
> [<ffffffff810b37e0>] smpboot_thread_fn+0x110/0x160
> [<ffffffff810af5d9>] kthread+0x109/0x140
> [<ffffffff81770e45>] ret_from_fork+0x25/0x30
> [<ffffffffffffffff>] 0xffffffffffffffff
>
> Reported-by: Chunyu Hu <chuhu@...hat.com>
> Signed-off-by: Chunyu Hu <chuhu@...hat.com>
> Signed-off-by: Shu Wang <shuwang@...hat.com>
Thanks!