Message-ID: <CAH+dOx+pfCXJ-zCb-CzfHevD_J8K5DHNEuvK0_KvQtk=xVLfdA@mail.gmail.com>
Date:	Thu, 24 Nov 2011 15:52:50 -0800
From:	Kent Overstreet <koverstreet@...gle.com>
To:	"Ted Ts'o" <tytso@....edu>, Tejun Heo <tj@...nel.org>,
	Andreas Dilger <adilger.kernel@...ger.ca>,
	linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
	Kent Overstreet <koverstreet@...gle.com>, rickyb@...gle.com,
	aberkan@...gle.com
Subject: Re: [PATCH] ext4: fix racy use-after-free in ext4_end_io_dio()

Heh. It took me about 2 seconds to trigger it in a VM :)

One reason it triggered so fast is that my VM test setup runs
everything out of RAM (the disks on the host are files in a tmpfs),
but the main reason we were hitting it is that bcache usually runs the
bio->bi_end_io function out of a workqueue, not IRQ context.
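
Roughly, that deferral looks like the sketch below. This is
illustrative only, not bcache's actual code; the struct and function
names are made up:

#include <linux/kernel.h>
#include <linux/bio.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct deferred_bio {
	struct work_struct work;
	struct bio *bio;
	int error;
};

static void deferred_end_io(struct work_struct *w)
{
	struct deferred_bio *d = container_of(w, struct deferred_bio, work);

	/* We're in process context now, not IRQ context, so the
	 * submitter's bi_end_io runs much later relative to the code
	 * that submitted the bio -- a much wider race window. */
	bio_endio(d->bio, d->error);
	kfree(d);
}

/* Called from the driver's real (IRQ-context) completion path;
 * d is assumed to have been allocated at submission time. */
static void punt_bio(struct deferred_bio *d, struct bio *bio, int error)
{
	d->bio = bio;
	d->error = error;
	INIT_WORK(&d->work, deferred_end_io);
	queue_work(system_wq, &d->work);
}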

It also seems to trigger only when a dio write is extending a file;
the same test setup run against an existing file never causes
(visible) slab corruption.

Do you think this would also explain the corruption D is seeing in vd?
I haven't yet figured out a mechanism, but the bug seems to fit.

On Thu, Nov 24, 2011 at 3:18 PM, Ted Ts'o <tytso@....edu> wrote:
> On Thu, Nov 24, 2011 at 11:46:26AM -0800, Tejun Heo wrote:
>> ext4_end_io_dio() queues io_end->work and then clears iocb->private;
>> however, io_end->work completes the iocb by calling aio_complete(),
>> which may happen before iocb->private is cleared, thus leading to a
>> use-after-free.
>>
>> Detected and tested with slab poisoning.
>>
>> Signed-off-by: Tejun Heo <tj@...nel.org>
>> Reported-by: Kent Overstreet <koverstreet@...gle.com>
>> Tested-by: Kent Overstreet <koverstreet@...gle.com>
>> Cc: stable@...nel.org
>
> Thanks!!  I've been trying to track down this bug for a while.  The
> repro case I had ran 12 fio instances against 12 different file
> systems with the following configuration:
>
> # 4k O_DIRECT writes through libaio, so completions go through
> # ext4_end_io_dio() and aio_complete()
> [global]
> direct=1
> ioengine=libaio
> iodepth=1
> bs=4k
> ba=4k
> size=128m
>
> [create]
> filename=${TESTDIR}
> rw=write
>
> ... and would leave a few inodes with elevated i_ioend_count values,
> which meant that any attempt to delete those inodes or to unmount the
> file system owning them would hang forever.
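
(For anyone else trying this: fio expands ${TESTDIR} from the
environment, so each instance can be pointed at a different mount.
Something like the following, where the mount points and job-file name
are hypothetical:)

# hypothetical paths; one fio instance per file system
for i in $(seq 1 12); do
        TESTDIR=/mnt/test$i/fiofile fio job.fio &
done
wait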
>
> With your patch this problem goes away.
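
The racy ordering and the fix look roughly like this (a simplified
sketch of the idea behind the patch, not the verbatim ext4 code):

/* Before the fix (simplified): */
queue_work(wq, &io_end->work);  /* the worker may run immediately and
                                 * aio_complete() may free *iocb ... */
iocb->private = NULL;           /* ... so this store can hit freed
                                 * (or reallocated) memory */

/* After the fix: don't touch the iocb once the work item that
 * completes and frees it has been queued. */
iocb->private = NULL;
queue_work(wq, &io_end->work);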
>
>> I *think* this is the correct fix, but I am not too familiar with the
>> code path, so please proceed with caution.
>
> Looks good to me.  Thanks, applied.
>
>> Thank you.
>
> No, thank *you*!  :-)
>
>                                        - Ted
>
> P.S.  It would be nice to get this into xfstests, but reproducing it
> requires at least 10-12 HDDs (12 to repro it reliably) and a fairly
> high core-count machine.  I played around with trying to create a
> reproducer that worked on a smaller number of disks, fio instances,
> and CPUs, but I was never able to manage it.
>