Message-ID: <CAADnVQJK5mPOB7B4KBa6q1NRYVQx1Eya5mtNb6=L0p-BaCxX=w@mail.gmail.com>
Date: Wed, 29 Dec 2021 18:23:20 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: butt3rflyh4ck <butterflyhuangxx@...il.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
Networking <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: A slab-out-of-bounds Read bug in __htab_map_lookup_and_delete_batch
On Wed, Dec 29, 2021 at 2:10 AM butt3rflyh4ck
<butterflyhuangxx@...il.com> wrote:
>
> Hi, there is a slab-out-of-bounds read bug in
> __htab_map_lookup_and_delete_batch in kernel/bpf/hashtab.c.
> I reproduced it on linux-5.16-rc7 (upstream) and on the latest linux-5.15.11.
>
> #crash log
> [ 166.945208][ T6897]
> ==================================================================
> [ 166.947075][ T6897] BUG: KASAN: slab-out-of-bounds in _copy_to_user+0x87/0xb0
> [ 166.948612][ T6897] Read of size 49 at addr ffff88801913f800 by task __htab_map_look/6897
> [ 166.950406][ T6897]
> [ 166.950890][ T6897] CPU: 1 PID: 6897 Comm: __htab_map_look Not tainted 5.16.0-rc7+ #30
> [ 166.952521][ T6897] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1 04/01/2014
> [ 166.954562][ T6897] Call Trace:
> [ 166.955268][ T6897] <TASK>
> [ 166.955918][ T6897] dump_stack_lvl+0x57/0x7d
> [ 166.956875][ T6897] print_address_description.constprop.0.cold+0x93/0x347
> [ 166.958411][ T6897] ? _copy_to_user+0x87/0xb0
> [ 166.959356][ T6897] ? _copy_to_user+0x87/0xb0
> [ 166.960272][ T6897] kasan_report.cold+0x83/0xdf
> [ 166.961196][ T6897] ? _copy_to_user+0x87/0xb0
> [ 166.962053][ T6897] kasan_check_range+0x13b/0x190
> [ 166.962978][ T6897] _copy_to_user+0x87/0xb0
> [ 166.964340][ T6897] __htab_map_lookup_and_delete_batch+0xdc2/0x1590
> [ 166.965619][ T6897] ? htab_lru_map_update_elem+0xe70/0xe70
> [ 166.966732][ T6897] bpf_map_do_batch+0x1fa/0x460
> [ 166.967619][ T6897] __sys_bpf+0x99a/0x3860
> [ 166.968443][ T6897] ? bpf_link_get_from_fd+0xd0/0xd0
> [ 166.969393][ T6897] ? rcu_read_lock_sched_held+0x9c/0xd0
> [ 166.970425][ T6897] ? lock_acquire+0x1ab/0x520
> [ 166.971284][ T6897] ? find_held_lock+0x2d/0x110
> [ 166.972208][ T6897] ? rcu_read_lock_sched_held+0x9c/0xd0
> [ 166.973139][ T6897] ? rcu_read_lock_bh_held+0xb0/0xb0
> [ 166.974096][ T6897] __x64_sys_bpf+0x70/0xb0
> [ 166.974903][ T6897] ? syscall_enter_from_user_mode+0x21/0x70
> [ 166.976077][ T6897] do_syscall_64+0x35/0xb0
> [ 166.976889][ T6897] entry_SYSCALL_64_after_hwframe+0x44/0xae
> [ 166.978027][ T6897] RIP: 0033:0x450f0d
>
>
> In a hash map, elements whose keys have the same jhash() value are put
> into the same bucket. By putting a lot of elements into a single
> bucket, the value of bucket_size can be increased until it overflows,
> and bucket_cnt can likewise be increased far enough to cause an
> out-of-bounds read.
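> The batch op can be driven from user space roughly like this
> (illustrative sketch only, via libbpf; the map size, key/value layout
> and batch count here are made up, not my exact reproducer):
> ```
> #include <bpf/bpf.h>
>
> int main(void)
> {
>         __u32 in_batch = 0, out_batch = 0, count = 64;
>         __u64 keys[64], vals[64];
>         DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts);
>         int fd, i;
>
>         fd = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(__u64),
>                             sizeof(__u64), 4096, 0);
>         if (fd < 0)
>                 return 1;
>
>         /* insert many elements so that some buckets end up holding
>          * several entries whose keys land in the same jhash() bucket
>          */
>         for (i = 0; i < 4096; i++) {
>                 __u64 k = i, v = i;
>                 bpf_map_update_elem(fd, &k, &v, BPF_ANY);
>         }
>
>         /* drain the map bucket by bucket; this is the path that ends
>          * up in __htab_map_lookup_and_delete_batch()
>          */
>         while (!bpf_map_lookup_and_delete_batch(fd, &in_batch, &out_batch,
>                                                 keys, vals, &count, &opts)) {
>                 in_batch = out_batch;
>                 count = 64;
>         }
>         return 0;
> }
> ```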
Can you be more specific?
If you can send a patch with a fix it would be even better.
> The out-of-bounds read happens in the following __htab_map_lookup_and_delete_batch code:
> ```
> ...
> if (bucket_cnt && (copy_to_user(ukeys + total * key_size, keys,
>     key_size * bucket_cnt) ||
>     copy_to_user(uvalues + total * value_size, values,
>     value_size * bucket_cnt))) {
>         ret = -EFAULT;
>         goto after_loop;
> }
> ...
> ```
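>
> The keys/values buffers above are kernel bounce buffers that the same
> function allocates earlier, sized for bucket_size entries. Roughly
> (paraphrased from kernel/bpf/hashtab.c, not a verbatim excerpt):
> ```
> /* initial guess: a bucket rarely holds more than a few entries */
> bucket_size = 5;
> alloc:
>         keys = kvmalloc(key_size * bucket_size, GFP_USER | __GFP_NOWARN);
>         values = kvmalloc(value_size * bucket_size, GFP_USER | __GFP_NOWARN);
> ...
>         /* per bucket: count the entries; if the bucket holds more than
>          * bucket_size, free the buffers and redo the allocation with a
>          * larger bucket_size
>          */
>         bucket_cnt = 0;
>         hlist_nulls_for_each_entry_rcu(l, n, head, hash_node)
>                 bucket_cnt++;
>         ...
>         if (bucket_cnt > bucket_size) {
>                 bucket_size = bucket_cnt;
>                 /* unlock, kvfree(keys), kvfree(values), goto alloc */
>         }
> ```
> So if bucket_cnt can end up larger than the bucket_size the buffers
> were last allocated for, the copy_to_user() of key_size * bucket_cnt
> reads past the end of keys.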
>
> Regards,
> butt3rflyh4ck.
>
>
> --
> Active Defense Lab of Venustech