Message-ID: <d09fb32e-ca76-4453-9f27-670ba1557da6@suse.cz>
Date: Fri, 8 Nov 2024 17:00:10 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Adrian Huang <adrianhuang0701@...il.com>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>,
Raghavendra K T <raghavendra.kt@....com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Adrian Huang <ahuang12@...ovo.com>,
Jiwei Sun <sunjw10@...ovo.com>
Subject: Re: [PATCH 1/1] sched/numa: Fix memory leak due to the overwritten
vma->numab_state
On 11/8/24 14:31, Adrian Huang wrote:
> From: Adrian Huang <ahuang12@...ovo.com>
>
> [Problem Description]
> When running the LTP hackbench program, the following memory leak is
> reported by kmemleak.
>
> # /opt/ltp/testcases/bin/hackbench 20 thread 1000
> Running with 20*40 (== 800) tasks.
>
> # dmesg | grep kmemleak
> ...
> kmemleak: 480 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
> kmemleak: 665 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
>
> # cat /sys/kernel/debug/kmemleak
> unreferenced object 0xffff888cd8ca2c40 (size 64):
>   comm "hackbench", pid 17142, jiffies 4299780315
>   hex dump (first 32 bytes):
>     ac 74 49 00 01 00 00 00 4c 84 49 00 01 00 00 00  .tI.....L.I.....
>     00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>   backtrace (crc bff18fd4):
>     [<ffffffff81419a89>] __kmalloc_cache_noprof+0x2f9/0x3f0
>     [<ffffffff8113f715>] task_numa_work+0x725/0xa00
>     [<ffffffff8110f878>] task_work_run+0x58/0x90
>     [<ffffffff81ddd9f8>] syscall_exit_to_user_mode+0x1c8/0x1e0
>     [<ffffffff81dd78d5>] do_syscall_64+0x85/0x150
>     [<ffffffff81e0012b>] entry_SYSCALL_64_after_hwframe+0x76/0x7e
> ...
>
> This issue can be consistently reproduced on three different servers:
> * a 448-core server
> * a 256-core server
> * a 192-core server
>
> [Root Cause]
> Since the hackbench program creates multiple threads (when invoked with
> the 'thread' argument), a shared vma may be accessed by two or more
> cores simultaneously. If two or more cores observe that
> vma->numab_state is NULL at the same time, each of them allocates a new
> numab_state and assigns it to the vma, so the later assignment
> overwrites the earlier one and the earlier allocation is leaked, as
> illustrated below.
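>
> For illustration, both cores can run the following (unpatched) snippet
> of task_numa_work() for the same vma at the same time; the inline
> comments about the race are annotations added for this description:
>
> 	/* Initialise new per-VMA NUMAB state. */
> 	if (!vma->numab_state) {
> 		/* Both cores see NULL here and proceed to allocate. */
> 		vma->numab_state = kzalloc(sizeof(struct vma_numab_state),
> 			GFP_KERNEL);
> 		/* The later store overwrites and leaks the earlier one. */
> 		...
> 	}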
>
> Note that the command `/opt/ltp/testcases/bin/hackbench 50 process 1000`
> cannot reproduce the issue because of fork() and COW; this was verified
> with 200+ test runs.
>
> [Solution]
> Introduce a per-VMA lock to serialise access to vma->numab_state, so
> that only one core allocates and initialises it.
>
> Fixes: ef6a22b70f6d ("sched/numa: apply the scan delay to every new vma")
> Reported-by: Jiwei Sun <sunjw10@...ovo.com>
> Signed-off-by: Adrian Huang <ahuang12@...ovo.com>
Could this be achieved without the new lock, by a cmpxchg attempt to install
vma->numab_state that will free the allocated vma_numab_state if it fails?
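
For example, something along these lines (just a rough, untested sketch;
'nstate' is only a local variable name picked for illustration):

	struct vma_numab_state *nstate;

	nstate = kzalloc(sizeof(*nstate), GFP_KERNEL);
	if (!nstate)
		continue;

	/* Only one core can install its allocation; the rest back off. */
	if (cmpxchg(&vma->numab_state, NULL, nstate)) {
		/* Somebody else installed theirs first, drop ours. */
		kfree(nstate);
		continue;
	}
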
Thanks,
Vlastimil
> ---
>  include/linux/mm.h       |  1 +
>  include/linux/mm_types.h |  1 +
>  kernel/sched/fair.c      | 17 ++++++++++++++++-
>  3 files changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 61fff5d34ed5..a08e31ac53de 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -673,6 +673,7 @@ struct vm_operations_struct {
>  static inline void vma_numab_state_init(struct vm_area_struct *vma)
>  {
>  	vma->numab_state = NULL;
> +	mutex_init(&vma->numab_state_lock);
>  }
>  static inline void vma_numab_state_free(struct vm_area_struct *vma)
>  {
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 6e3bdf8e38bc..77eee89a89f5 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -768,6 +768,7 @@ struct vm_area_struct {
>  #endif
>  #ifdef CONFIG_NUMA_BALANCING
>  	struct vma_numab_state *numab_state;	/* NUMA Balancing state */
> +	struct mutex numab_state_lock;		/* NUMA Balancing state lock */
>  #endif
>  	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
>  } __randomize_layout;
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c157d4860a3b..53e6383cd94e 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3397,12 +3397,24 @@ static void task_numa_work(struct callback_head *work)
>  			continue;
>  		}
>
> +		/*
> +		 * For a shared vma, vma->numab_state can be overwritten
> +		 * (and the earlier allocation leaked) if two or more cores
> +		 * observe it is NULL at the same time. Take the lock so
> +		 * that only one core allocates memory for
> +		 * vma->numab_state.
> +		 */
> +		if (!mutex_trylock(&vma->numab_state_lock))
> +			continue;
> +
>  		/* Initialise new per-VMA NUMAB state. */
>  		if (!vma->numab_state) {
>  			vma->numab_state = kzalloc(sizeof(struct vma_numab_state),
>  				GFP_KERNEL);
> -			if (!vma->numab_state)
> +			if (!vma->numab_state) {
> +				mutex_unlock(&vma->numab_state_lock);
>  				continue;
> +			}
>
>  			vma->numab_state->start_scan_seq = mm->numa_scan_seq;
>
> @@ -3428,6 +3440,7 @@ static void task_numa_work(struct callback_head *work)
>  		if (mm->numa_scan_seq && time_before(jiffies,
>  				vma->numab_state->next_scan)) {
>  			trace_sched_skip_vma_numa(mm, vma, NUMAB_SKIP_SCAN_DELAY);
> +			mutex_unlock(&vma->numab_state_lock);
>  			continue;
>  		}
>
> @@ -3440,6 +3453,8 @@ static void task_numa_work(struct callback_head *work)
>  			vma->numab_state->pids_active[1] = 0;
>  		}
>
> +		mutex_unlock(&vma->numab_state_lock);
> +
>  		/* Do not rescan VMAs twice within the same sequence. */
>  		if (vma->numab_state->prev_scan_seq == mm->numa_scan_seq) {
>  			mm->numa_scan_offset = vma->vm_end;