Message-ID: <CAADnVQJndMkNh4X-w0520B8PVN122h8XKQxE4g4LmDTKyWd=0Q@mail.gmail.com>
Date: Wed, 15 May 2024 14:30:37 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Ubisectech Sirius <bugreport@...sectech.com>, Jiri Olsa <jolsa@...nel.org>
Cc: linux-trace-kernel <linux-trace-kernel@...r.kernel.org>, 
	linux-kernel <linux-kernel@...r.kernel.org>, ast <ast@...nel.org>, 
	daniel <daniel@...earbox.net>, andrii <andrii@...nel.org>
Subject: Re: WARNING: kmalloc bug in bpf_uprobe_multi_link_attach

On Tue, May 14, 2024 at 12:33 AM Ubisectech Sirius
<bugreport@...sectech.com> wrote:
>
> Hello.
> We are the Ubisectech Sirius Team, the vulnerability lab of China ValiantSec. Recently, our team discovered an issue in Linux kernel 6.7. Attached to this email is a PoC file for the issue.

Jiri,

please take a look.

> Stack dump:
>
> loop3: detected capacity change from 0 to 8
> MTD: Attempt to mount non-MTD device "/dev/loop3"
> ------------[ cut here ]------------
> WARNING: CPU: 1 PID: 10075 at mm/util.c:632 kvmalloc_node+0x199/0x1b0 mm/util.c:632
> Modules linked in:
> CPU: 1 PID: 10075 Comm: syz-executor.3 Not tainted 6.7.0 #2
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
> RIP: 0010:kvmalloc_node+0x199/0x1b0 mm/util.c:632
> Code: 02 1d 00 eb aa e8 a7 49 c6 ff 41 81 e5 00 20 00 00 31 ff 44 89 ee e8 36 45 c6 ff 45 85 ed 0f 85 1b ff ff ff e8 88 49 c6 ff 90 <0f> 0b 90 e9 dd fe ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40
> RSP: 0018:ffffc90002007b60 EFLAGS: 00010212
> RAX: 00000000000023e4 RBX: 0000000000000400 RCX: ffffc90003aaa000
> RDX: 0000000000040000 RSI: ffffffff81c3acc8 RDI: 0000000000000005
> RBP: 00000037ffffcec8 R08: 0000000000000005 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
> R13: 0000000000000000 R14: 00000000ffffffff R15: ffff88805ff6e1b8
> FS:  00007fc62205f640(0000) GS:ffff88807ec00000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000001b2e026000 CR3: 000000005f338000 CR4: 0000000000750ef0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> PKRU: 55555554
> Call Trace:
>  <TASK>
>  kvmalloc include/linux/slab.h:738 [inline]
>  kvmalloc_array include/linux/slab.h:756 [inline]
>  kvcalloc include/linux/slab.h:761 [inline]
>  bpf_uprobe_multi_link_attach+0x3fe/0xf60 kernel/trace/bpf_trace.c:3239
>  link_create kernel/bpf/syscall.c:5012 [inline]
>  __sys_bpf+0x2e85/0x4e00 kernel/bpf/syscall.c:5453
>  __do_sys_bpf kernel/bpf/syscall.c:5487 [inline]
>  __se_sys_bpf kernel/bpf/syscall.c:5485 [inline]
>  __x64_sys_bpf+0x78/0xc0 kernel/bpf/syscall.c:5485
>  do_syscall_x64 arch/x86/entry/common.c:52 [inline]
>  do_syscall_64+0x43/0x120 arch/x86/entry/common.c:83
>  entry_SYSCALL_64_after_hwframe+0x6f/0x77
> RIP: 0033:0x7fc62128fd6d
> Code: c3 e8 97 2b 00 00 0f 1f 80 00 00 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
> RSP: 002b:00007fc62205f028 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
> RAX: ffffffffffffffda RBX: 00007fc6213cbf80 RCX: 00007fc62128fd6d
> RDX: 0000000000000040 RSI: 00000000200001c0 RDI: 000000000000001c
> RBP: 00007fc6212f14cd R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> R13: 000000000000000b R14: 00007fc6213cbf80 R15: 00007fc62203f000
>  </TASK>
>
> Thank you for taking the time to read this email; we look forward to working with you further.
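
The warning at mm/util.c:632 is the size sanity check in kvmalloc_node(): it WARNs (and fails the allocation) when the requested size exceeds INT_MAX and __GFP_NOWARN is not set. From the trace, the oversized request comes from the kvcalloc() in bpf_uprobe_multi_link_attach(), whose element count is taken from the user-supplied link_create attributes, so a caller can ask for an arbitrarily large array. Below is a minimal sketch of the usual guard for this pattern; the MAX_UPROBE_MULTI_CNT name and value are illustrative assumptions, not necessarily the fix upstream applied:

/*
 * Sketch only: bound the user-controlled count before the array
 * allocation so kvcalloc() is never asked for more than INT_MAX
 * bytes. The limit below is an illustrative value, not the
 * actual upstream choice.
 */
#define MAX_UPROBE_MULTI_CNT (1U << 20)

	cnt = attr->link_create.uprobe_multi.cnt;
	if (!cnt)
		return -EINVAL;
	if (cnt > MAX_UPROBE_MULTI_CNT)
		return -E2BIG;

	uprobes = kvcalloc(cnt, sizeof(*uprobes), GFP_KERNEL);
	if (!uprobes)
		return -ENOMEM;

On kernels built with panic_on_warn (as syzkaller test images typically are), the WARN alone is enough to bring the machine down, so rejecting oversized counts with -E2BIG before allocating is the conventional hardening for this class of report.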