Message-ID: <20111124231848.GC5167@thunk.org>
Date: Thu, 24 Nov 2011 18:18:49 -0500
From: Ted Ts'o <tytso@....edu>
To: Tejun Heo <tj@...nel.org>
Cc: Andreas Dilger <adilger.kernel@...ger.ca>,
linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
Kent Overstreet <koverstreet@...gle.com>, rickyb@...gle.com,
aberkan@...gle.com
Subject: Re: [PATCH] ext4: fix racy use-after-free in ext4_end_io_dio()
On Thu, Nov 24, 2011 at 11:46:26AM -0800, Tejun Heo wrote:
> ext4_end_io_dio() queues io_end->work and then clears iocb->private;
> however, io_end->work completes the iocb by calling aio_complete(),
> which may happen before iocb->private is cleared, thus leading to a
> use-after-free.
>
> Detected and tested with slab poisoning.
>
> Signed-off-by: Tejun Heo <tj@...nel.org>
> Reported-by: Kent Overstreet <koverstreet@...gle.com>
> Tested-by: Kent Overstreet <koverstreet@...gle.com>
> Cc: stable@...nel.org
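[For readers following along, the race and the fix can be sketched
roughly as below. This is a simplified userspace stand-in, not the
actual ext4/aio code: the struct layouts, function names, and the
"work runs immediately" modeling are all assumptions for illustration.]

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical, simplified stand-ins for the real kernel objects. */
struct iocb { void *private; };
struct io_end { struct iocb *iocb; };

static int iocb_freed;

/* Stand-in for the queued work item: completing the iocb frees it,
 * as aio_complete() ultimately does. */
static void io_end_work(struct io_end *io_end)
{
	free(io_end->iocb);
	iocb_freed = 1;	/* past this point the iocb must not be touched */
}

/* Buggy ordering: the work is queued (modeled here as running at
 * once, as it can on another CPU) before iocb->private is cleared. */
static int end_io_buggy(struct iocb *iocb, struct io_end *io_end)
{
	io_end_work(io_end);
	if (iocb_freed)
		return -1;	/* writing iocb->private now would be the UAF */
	iocb->private = NULL;
	return 0;
}

/* Fixed ordering: clear iocb->private while the iocb is certainly
 * still alive, then let the work complete (and free) it. */
static int end_io_fixed(struct iocb *iocb, struct io_end *io_end)
{
	iocb->private = NULL;
	io_end_work(io_end);
	return 0;
}
```

The point being that once the work is queued, the submitter no longer
owns the iocb and must not touch it.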
Thanks!! I've been trying to track down this bug for a while. The
repro case I had ran 12 fio instances against 12 different file
systems, each with the following configuration:
[global]
direct=1
ioengine=libaio
iodepth=1
bs=4k
ba=4k
size=128m
[create]
filename=${TESTDIR}
rw=write
... and would leave a few inodes with elevated i_ioend_counts, which
means any attempt to delete those inodes or to unmount the file system
owning those inodes would hang forever.
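[For what it's worth, a driver for such a repro could look like the
following dry run. The mount points and job-file name are hypothetical;
drop the `echo`, background each command with `&`, and `wait` at the
end to actually launch the 12 jobs in parallel.]

```shell
# Print one fio invocation per test file system; fio expands
# ${TESTDIR} from the environment into the job file's "filename=" line.
for i in $(seq 1 12); do
	echo "TESTDIR=/mnt/test$i/testfile fio ext4-dio.fio"
done
```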
With your patch this problem goes away.
> I *think* this is the correct fix but am not too familiar with the
> code path, so please proceed with caution.
Looks good to me. Thanks, applied.
> Thank you.
No, thank *you*! :-)
- Ted
P.S. It would be nice to get this into xfstests, but reproducing it
requires at least 10-12 HDDs (12 to repro it reliably) and a fairly
high core count machine. I played around with trying to create a
reproducer that worked with a smaller number of disks and/or fio
instances/CPUs, but I was never able to manage it.