Message-ID: <CAPhsuW7mK3J6st9PG2tu6EjTmPK7cUqXX1JMRyvhKo8ehFWi+Q@mail.gmail.com>
Date: Fri, 8 Mar 2024 10:16:25 -0800
From: Song Liu <song@...nel.org>
To: Coly Li <colyli@...e.de>
Cc: Yu Kuai <yukuai1@...weicloud.com>, yukuai3@...wei.com, xueshi.hu@...rtx.com, 
	linux-raid@...r.kernel.org, linux-kernel@...r.kernel.org, yi.zhang@...wei.com, 
	yangerkun@...wei.com
Subject: Re: [PATCH] raid1: fix use-after-free for original bio in raid1_write_request()

On Fri, Mar 8, 2024 at 6:29 AM Coly Li <colyli@...e.de> wrote:
>
> On Fri, Mar 08, 2024 at 05:37:26PM +0800, Yu Kuai wrote:
> > From: Yu Kuai <yukuai3@...wei.com>
> >
> > r1_bio->bios[] is used to record new bios that will be issued to
> > underlying disks. However, in raid1_write_request(), r1_bio->bios[]
> > is temporarily set to the original bio. Meanwhile, if a blocked rdev
> > is found, free_r1bio() will be called, causing all entries in
> > r1_bio->bios[] to be freed:
> >
> > raid1_write_request()
> >  r1_bio = alloc_r1bio(mddev, bio); -> r1_bio->bios[] is NULL
> >  for (i = 0;  i < disks; i++) -> for each rdev in conf
> >   // first rdev is normal
> >   r1_bio->bios[0] = bio; -> set to original bio
> >   // second rdev is blocked
> >   if (test_bit(Blocked, &rdev->flags))
> >    break
> >
> >  if (blocked_rdev)
> >   free_r1bio()
> >    put_all_bios()
> >     bio_put(r1_bio->bios[0]) -> original bio is freed
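For context on why that bio_put() reaches the caller's bio: free_r1bio()
drops every entry in r1_bio->bios[] via put_all_bios(). A rough sketch of
that helper, paraphrased from drivers/md/raid1.c (the BIO_SPECIAL handling
and the disk count may differ across kernel versions):

    static void put_all_bios(struct r1conf *conf, struct r1bio *r1_bio)
    {
            int i;

            for (i = 0; i < conf->raid_disks * 2; i++) {
                    struct bio **bio = r1_bio->bios + i;

                    /*
                     * Meant to drop the bios raid1 allocated for the member
                     * disks; when bios[i] still points at the original bio,
                     * this bio_put() releases a reference that the submitter
                     * of the original bio still owns.
                     */
                    if (!BIO_SPECIAL(*bio))
                            bio_put(*bio);
                    *bio = NULL;
            }
    }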
> >
> > Test scripts:
> >
> > mdadm -CR /dev/md0 -l1 -n4 /dev/sd[abcd] --assume-clean
> > fio -filename=/dev/md0 -ioengine=libaio -rw=write -bs=4k -numjobs=1 \
> >     -iodepth=128 -name=test -direct=1
> > echo blocked > /sys/block/md0/md/rd2/state
> >
> > Test result:
> >
> > BUG bio-264 (Not tainted): Object already free
> > -----------------------------------------------------------------------------
> >
> > Allocated in mempool_alloc_slab+0x24/0x50 age=1 cpu=1 pid=869
> >  kmem_cache_alloc+0x324/0x480
> >  mempool_alloc_slab+0x24/0x50
> >  mempool_alloc+0x6e/0x220
> >  bio_alloc_bioset+0x1af/0x4d0
> >  blkdev_direct_IO+0x164/0x8a0
> >  blkdev_write_iter+0x309/0x440
> >  aio_write+0x139/0x2f0
> >  io_submit_one+0x5ca/0xb70
> >  __do_sys_io_submit+0x86/0x270
> >  __x64_sys_io_submit+0x22/0x30
> >  do_syscall_64+0xb1/0x210
> >  entry_SYSCALL_64_after_hwframe+0x6c/0x74
> > Freed in mempool_free_slab+0x1f/0x30 age=1 cpu=1 pid=869
> >  kmem_cache_free+0x28c/0x550
> >  mempool_free_slab+0x1f/0x30
> >  mempool_free+0x40/0x100
> >  bio_free+0x59/0x80
> >  bio_put+0xf0/0x220
> >  free_r1bio+0x74/0xb0
> >  raid1_make_request+0xadf/0x1150
> >  md_handle_request+0xc7/0x3b0
> >  md_submit_bio+0x76/0x130
> >  __submit_bio+0xd8/0x1d0
> >  submit_bio_noacct_nocheck+0x1eb/0x5c0
> >  submit_bio_noacct+0x169/0xd40
> >  submit_bio+0xee/0x1d0
> >  blkdev_direct_IO+0x322/0x8a0
> >  blkdev_write_iter+0x309/0x440
> >  aio_write+0x139/0x2f0
> >
> > Since the bios for the underlying disks are not allocated yet, fix this
> > problem by using mempool_free() directly to free the r1_bio.
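A minimal sketch of what the blocked-rdev path looks like with this change,
based only on the description above (the surrounding rdev_dec_pending loop,
barrier calls, and retry label are paraphrased from raid1_write_request()
and may not match the applied patch exactly):

    if (unlikely(blocked_rdev)) {
            int j;

            /* Drop the rdev references taken for the disks visited so far. */
            for (j = 0; j < i; j++)
                    if (r1_bio->bios[j])
                            rdev_dec_pending(conf->mirrors[j].rdev, mddev);

            /*
             * The per-disk bios have not been allocated yet, and
             * r1_bio->bios[] may still point at the original bio, so free
             * only the r1_bio itself instead of calling free_r1bio(),
             * which would bio_put() the original bio via put_all_bios().
             */
            mempool_free(r1_bio, &conf->r1bio_pool);

            allow_barrier(conf, bio->bi_iter.bi_sector);
            md_wait_for_blocked_rdev(blocked_rdev, mddev);
            wait_barrier(conf, bio->bi_iter.bi_sector, false);
            goto retry_write;
    }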
> >
>
> Yes, the panic doesn't show up anymore with this patch.
>
> Thanks for the fixup.
>
>
> > Fixes: 992db13a4aee ("md/raid1: free the r1bio before waiting for blocked rdev")
> > Signed-off-by: Yu Kuai <yukuai3@...wei.com>
>
> Reported-and-tested-by: Coly Li <colyli@...e.de>

Applied to md-6.9. Thanks!

Song
