Message-ID: <CADUfDZrcbU5ABbLs6SBCHjKwfGtAnpcMtJaLQc7BTKNbG4RJ0A@mail.gmail.com>
Date: Wed, 17 Dec 2025 13:22:48 -0800
From: Caleb Sander Mateos <csander@...estorage.com>
To: veygax <veyga@...gax.dev>
Cc: Jens Axboe <axboe@...nel.dk>, "io-uring@...r.kernel.org" <io-uring@...r.kernel.org>, 
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] io_uring/rsrc: fix slab-out-of-bounds in io_buffer_register_bvec

On Wed, Dec 17, 2025 at 1:04 PM veygax <veyga@...gax.dev> wrote:
>
> From: Evan Lambert <veyga@...gax.dev>
>
> The function io_buffer_register_bvec() calculates the allocation size
> for the io_mapped_ubuf based on blk_rq_nr_phys_segments(rq). This
> function calculates the number of scatter-gather elements after megine

"merging"?

> physically contiguous pages.
>
> However, the subsequent loop uses rq_for_each_bvec() to populate the
> array, which iterates over every individual bio_vec in the request,
> regardless of physical contiguity.

Hmm, I would have thought that physically contiguous bio_vecs would
have been merged by the block layer? But that's definitely beyond my
expertise.
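
For readers following the report, the mismatch being claimed boils down to
the two calls below (a minimal sketch lifted from the existing
io_buffer_register_bvec() code quoted in the diff further down; the comments
are descriptive only and restate the commit message's claim, which is what
is being questioned here):

	/* Array is sized by the number of merged physical segments... */
	imu = io_alloc_imu(ctx, blk_rq_nr_phys_segments(rq));

	/*
	 * ...but the fill loop writes one entry per raw bio_vec. The commit
	 * message asserts this count can exceed blk_rq_nr_phys_segments(rq),
	 * overrunning the imu->bvec[] array.
	 */
	rq_for_each_bvec(bv, rq, rq_iter)
		imu->bvec[nr_bvecs++] = bv;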

>
> If a request has multiple bio_vec entries that are physically
> contiguous, blk_rq_nr_phys_segments() returns a value smaller than
> the total number of bio_vecs. This leads to a slab-out-of-bounds write.
>
> The path is reachable from userspace via the ublk driver when a server
> issues a UBLK_IO_REGISTER_IO_BUF command. This requires the
> UBLK_F_SUPPORT_ZERO_COPY flag which is protected by CAP_NET_ADMIN.

"CAP_SYS_ADMIN"?

>
> Fix this by calculating the total number of bio_vecs by iterating
> over the request's bios and summing their bi_vcnt.
>
> KASAN report:
>
> [18:01:50] BUG: KASAN: slab-out-of-bounds in io_buffer_register_bvec+0x813/0xb80
> [18:01:50] Write of size 8 at addr ffff88800223b238 by task kunit_try_catch/27
> [18:01:50]
> [18:01:50] CPU: 0 UID: 0 PID: 27 Comm: kunit_try_catch Tainted: G                 N  6.19.0-rc1-g346af1a0c65a-dirty #44 PREEMPT(none)
> [18:01:50] Tainted: [N]=TEST
> [18:01:50] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.1 11/11/2019
> [18:01:50] Call Trace:
> [18:01:50]  <TASK>
> [18:01:50]  dump_stack_lvl+0x4d/0x70
> [18:01:50]  print_report+0x151/0x4c0
> [18:01:50]  ? __pfx__raw_spin_lock_irqsave+0x10/0x10
> [18:01:50]  ? io_buffer_register_bvec+0x813/0xb80
> [18:01:50]  kasan_report+0xec/0x120
> [18:01:50]  ? io_buffer_register_bvec+0x813/0xb80
> [18:01:50]  io_buffer_register_bvec+0x813/0xb80
> [18:01:50]  io_buffer_register_bvec_overflow_test+0x4e6/0x9b0
> [18:01:50]  ? __pfx_io_buffer_register_bvec_overflow_test+0x10/0x10
> [18:01:50]  ? __pfx_pick_next_task_fair+0x10/0x10
> [18:01:50]  ? _raw_spin_lock+0x7e/0xd0
> [18:01:50]  ? finish_task_switch.isra.0+0x19a/0x650
> [18:01:50]  ? __pfx_read_tsc+0x10/0x10
> [18:01:50]  ? ktime_get_ts64+0x79/0x240
> [18:01:50]  kunit_try_run_case+0x19b/0x2c0

This doesn't look like an actual ublk zero-copy buffer registration.
Where does the struct request come from?

> [18:01:50]  ? __pfx_kunit_try_run_case+0x10/0x10
> [18:01:50]  ? __pfx_kunit_generic_run_threadfn_adapter+0x10/0x10
> [18:01:50]  kunit_generic_run_threadfn_adapter+0x80/0xf0
> [18:01:50]  kthread+0x323/0x670
> [18:01:50]  ? __pfx_kthread+0x10/0x10
> [18:01:50]  ? __pfx__raw_spin_lock_irq+0x10/0x10
> [18:01:50]  ? __pfx_kthread+0x10/0x10
> [18:01:50]  ret_from_fork+0x329/0x420
> [18:01:50]  ? __pfx_ret_from_fork+0x10/0x10
> [18:01:50]  ? __switch_to+0xa0f/0xd40
> [18:01:50]  ? __pfx_kthread+0x10/0x10
> [18:01:50]  ret_from_fork_asm+0x1a/0x30
> [18:01:50]  </TASK>
> [18:01:50]
> [18:01:50] Allocated by task 27:
> [18:01:50]  kasan_save_stack+0x30/0x50
> [18:01:50]  kasan_save_track+0x14/0x30
> [18:01:50]  __kasan_kmalloc+0x7f/0x90
> [18:01:50]  io_cache_alloc_new+0x35/0xc0
> [18:01:50]  io_buffer_register_bvec+0x196/0xb80
> [18:01:50]  io_buffer_register_bvec_overflow_test+0x4e6/0x9b0
> [18:01:50]  kunit_try_run_case+0x19b/0x2c0
> [18:01:50]  kunit_generic_run_threadfn_adapter+0x80/0xf0
> [18:01:50]  kthread+0x323/0x670
> [18:01:50]  ret_from_fork+0x329/0x420
> [18:01:50]  ret_from_fork_asm+0x1a/0x30
> [18:01:50]
> [18:01:50] The buggy address belongs to the object at ffff88800223b000
> [18:01:50]  which belongs to the cache kmalloc-1k of size 1024
> [18:01:50] The buggy address is located 0 bytes to the right of
> [18:01:50]  allocated 568-byte region [ffff88800223b000, ffff88800223b238)
> [18:01:50]
> [18:01:50] The buggy address belongs to the physical page:
> [18:01:50] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2238
> [18:01:50] head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
> [18:01:50] flags: 0x4000000000000040(head|zone=1)
> [18:01:50] page_type: f5(slab)
> [18:01:50] raw: 4000000000000040 ffff888001041dc0 dead000000000122 0000000000000000
> [18:01:50] raw: 0000000000000000 0000000080080008 00000000f5000000 0000000000000000
> [18:01:50] head: 4000000000000040 ffff888001041dc0 dead000000000122 0000000000000000
> [18:01:50] head: 0000000000000000 0000000080080008 00000000f5000000 0000000000000000
> [18:01:50] head: 4000000000000002 ffffea0000088e01 00000000ffffffff 00000000ffffffff
> [18:01:50] head: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
> [18:01:50] page dumped because: kasan: bad access detected
> [18:01:50]
> [18:01:50] Memory state around the buggy address:
> [18:01:50]  ffff88800223b100: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> [18:01:50]  ffff88800223b180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> [18:01:50] >ffff88800223b200: 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc
> [18:01:50]                                         ^
> [18:01:50]  ffff88800223b280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> [18:01:50]  ffff88800223b300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
> [18:01:50] ==================================================================
> [18:01:50] Disabling lock debugging due to kernel taint
>
> Fixes: 27cb27b6d5ea ("io_uring: add support for kernel registered bvecs")
> Signed-off-by: Evan Lambert <veyga@...gax.dev>
> ---
>  io_uring/rsrc.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
> index a63474b331bf..7602b71543e0 100644
> --- a/io_uring/rsrc.c
> +++ b/io_uring/rsrc.c
> @@ -946,6 +946,7 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd, struct request *rq,
>         struct io_mapped_ubuf *imu;
>         struct io_rsrc_node *node;
>         struct bio_vec bv;
> +       struct bio *bio;
>         unsigned int nr_bvecs = 0;
>         int ret = 0;
>
> @@ -967,11 +968,10 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd, struct request *rq,
>                 goto unlock;
>         }
>
> -       /*
> -        * blk_rq_nr_phys_segments() may overestimate the number of bvecs
> -        * but avoids needing to iterate over the bvecs
> -        */
> -       imu = io_alloc_imu(ctx, blk_rq_nr_phys_segments(rq));
> +       __rq_for_each_bio(bio, rq)
> +               nr_bvecs += bio->bi_vcnt;
> +
> +       imu = io_alloc_imu(ctx, nr_bvecs);
>         if (!imu) {
>                 kfree(node);
>                 ret = -ENOMEM;
> @@ -988,6 +988,7 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd, struct request *rq,
>         imu->is_kbuf = true;
>         imu->dir = 1 << rq_data_dir(rq);
>
> +       nr_bvecs = 0;
>         rq_for_each_bvec(bv, rq, rq_iter)
>                 imu->bvec[nr_bvecs++] = bv;

Could alternatively check for mergeability with the previous bvec here.
That would avoid needing to allocate extra memory for physically
contiguous bvecs.
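
Roughly along these lines (illustrative only; the contiguity check below is
a simplification of what the block layer actually considers mergeable, e.g.
it ignores max segment size limits):

	nr_bvecs = 0;
	rq_for_each_bvec(bv, rq, rq_iter) {
		struct bio_vec *prev = nr_bvecs ? &imu->bvec[nr_bvecs - 1] : NULL;

		/*
		 * Coalesce into the previous entry when physically
		 * contiguous, so no more entries than
		 * blk_rq_nr_phys_segments(rq) should end up being written.
		 */
		if (prev &&
		    page_to_phys(prev->bv_page) + prev->bv_offset + prev->bv_len ==
		    page_to_phys(bv.bv_page) + bv.bv_offset) {
			prev->bv_len += bv.bv_len;
			continue;
		}
		imu->bvec[nr_bvecs++] = bv;
	}
	imu->nr_bvecs = nr_bvecs;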

Best,
Caleb

>         imu->nr_bvecs = nr_bvecs;
> --
> 2.52.0
>
>
