Message-ID: <3727258a-7caf-4f05-b8a9-20ab82ee4ea0@suse.cz>
Date: Fri, 15 Nov 2024 12:02:27 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Raghavendra K T <raghavendra.kt@....com>,
 Adrian Huang <adrianhuang0701@...il.com>, Ingo Molnar <mingo@...hat.com>,
 Peter Zijlstra <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
 Vincent Guittot <vincent.guittot@...aro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>,
 Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
 Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>,
 Andrew Morton <akpm@...ux-foundation.org>, linux-kernel@...r.kernel.org,
 Adrian Huang <ahuang12@...ovo.com>, Jiwei Sun <sunjw10@...ovo.com>,
 "linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [PATCH v2 1/1] sched/numa: Fix memory leak due to the overwritten
 vma->numab_state

On 11/15/24 11:45, Raghavendra K T wrote:
> + Vlastimil
> 
> Looks like he was unintentionally missed in CC. He had added his Reviewed-by
> to v1.

Thanks, it seems I added it to v1 when v2 had already been sent, and since
linux-mm wasn't also cc'd I didn't notice that it had.

> On 11/13/2024 3:51 PM, Adrian Huang wrote:
>> From: Adrian Huang <ahuang12@...ovo.com>
>> 
>> [Problem Description]
>> When running the hackbench program of LTP, the following memory leak is
>> reported by kmemleak.
>> 
>>    # /opt/ltp/testcases/bin/hackbench 20 thread 1000
>>    Running with 20*40 (== 800) tasks.
>> 
>>    # dmesg | grep kmemleak
>>    ...
>>    kmemleak: 480 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
>>    kmemleak: 665 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
>> 
>>    # cat /sys/kernel/debug/kmemleak
>>    unreferenced object 0xffff888cd8ca2c40 (size 64):
>>      comm "hackbench", pid 17142, jiffies 4299780315
>>      hex dump (first 32 bytes):
>>        ac 74 49 00 01 00 00 00 4c 84 49 00 01 00 00 00  .tI.....L.I.....
>>        00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>>      backtrace (crc bff18fd4):
>>        [<ffffffff81419a89>] __kmalloc_cache_noprof+0x2f9/0x3f0
>>        [<ffffffff8113f715>] task_numa_work+0x725/0xa00
>>        [<ffffffff8110f878>] task_work_run+0x58/0x90
>>        [<ffffffff81ddd9f8>] syscall_exit_to_user_mode+0x1c8/0x1e0
>>        [<ffffffff81dd78d5>] do_syscall_64+0x85/0x150
>>        [<ffffffff81e0012b>] entry_SYSCALL_64_after_hwframe+0x76/0x7e
>>    ...
>> 
>> This issue can be consistently reproduced on three different servers:
>>    * a 448-core server
>>    * a 256-core server
>>    * a 192-core server
>> 
>> [Root Cause]
>> Since multiple threads are created by the hackbench program (via the
>> command argument 'thread'), a shared vma might be accessed by two or
>> more cores simultaneously. When two or more cores observe that
>> vma->numab_state is NULL at the same time, each allocates and assigns its
>> own numab_state, and one assignment overwrites the other, leaking the
>> overwritten allocation.
>> 
>> Although the current code ensures that only one thread scans the VMAs in a
>> single 'numa_scan_period', another thread may enter in the next
>> 'numa_scan_period' before the first thread has reached the numab_state
>> allocation [1].
>> 
>> Note that the command `/opt/ltp/testcases/bin/hackbench 50 process 1000`
>> cannot reproduce the issue. This was verified with 200+ test runs.
>> 
>> [Solution]
>> Use the cmpxchg atomic operation to ensure that only one thread executes
>> the vma->numab_state assignment.
>> 
>> [1] https://lore.kernel.org/lkml/1794be3c-358c-4cdc-a43d-a1f841d91ef7@amd.com/
>> 
>> Fixes: ef6a22b70f6d ("sched/numa: apply the scan delay to every new vma")
>> Reported-by: Jiwei Sun <sunjw10@...ovo.com>
>> Signed-off-by: Adrian Huang <ahuang12@...ovo.com>
>> Reviewed-by: Raghavendra K T <raghavendra.kt@....com>

Reviewed-by: Vlastimil Babka <vbabka@...e.cz>

>> ---
>>   kernel/sched/fair.c | 12 +++++++++---
>>   1 file changed, 9 insertions(+), 3 deletions(-)
>> 
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 3356315d7e64..7f99df294583 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -3399,10 +3399,16 @@ static void task_numa_work(struct callback_head *work)
>>   
>>   		/* Initialise new per-VMA NUMAB state. */
>>   		if (!vma->numab_state) {
>> -			vma->numab_state = kzalloc(sizeof(struct vma_numab_state),
>> -				GFP_KERNEL);
>> -			if (!vma->numab_state)
>> +			struct vma_numab_state *ptr;
>> +
>> +			ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
>> +			if (!ptr)
>> +				continue;
>> +
>> +			if (cmpxchg(&vma->numab_state, NULL, ptr)) {
>> +				kfree(ptr);
>>   				continue;
>> +			}
>>   
>>   			vma->numab_state->start_scan_seq = mm->numa_scan_seq;
>>   
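
Not part of the patch, just for illustration: a minimal userspace sketch
(C11 atomics, with made-up names such as struct state and state_ptr) of the
allocate-then-publish-with-cmpxchg pattern the fix relies on. The thread that
loses the exchange frees its own copy, so no allocation leaks.

/*
 * Each thread allocates a candidate object, then tries to install it with a
 * single compare-and-exchange. Exactly one install succeeds; losers free
 * their copy. Illustrative only, not kernel code.
 */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

struct state { long start_scan_seq; };

static _Atomic(struct state *) state_ptr = NULL;

static void *worker(void *arg)
{
	struct state *ptr = calloc(1, sizeof(*ptr));
	struct state *expected = NULL;

	if (!ptr)
		return NULL;

	/* Install ptr only if nobody else has installed theirs first. */
	if (!atomic_compare_exchange_strong(&state_ptr, &expected, ptr))
		free(ptr);	/* lost the race: drop our copy, no leak */

	return NULL;
}

int main(void)
{
	pthread_t t[8];

	for (int i = 0; i < 8; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (int i = 0; i < 8; i++)
		pthread_join(t[i], NULL);

	printf("installed state at %p\n", (void *)atomic_load(&state_ptr));
	free(atomic_load(&state_ptr));
	return 0;
}

Compile with -pthread. The kernel patch applies the same publish-or-free idea
with cmpxchg() on vma->numab_state.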

