Message-ID: <4E4B25D2.8080700@tao.ma>
Date: Wed, 17 Aug 2011 10:22:10 +0800
From: Tao Ma <tm@....ma>
To: Jiaying Zhang <jiayingz@...gle.com>
CC: Dave Chinner <david@...morbit.com>, Jan Kara <jack@...e.cz>,
Michael Tokarev <mjt@....msk.ru>, linux-ext4@...r.kernel.org,
sandeen@...hat.com
Subject: Re: DIO process stuck apparently due to dioread_nolock (3.0)
Hi Jiaying,
On 08/17/2011 08:08 AM, Jiaying Zhang wrote:
> On Tue, Aug 16, 2011 at 4:59 PM, Dave Chinner <david@...morbit.com> wrote:
>> On Tue, Aug 16, 2011 at 02:32:12PM -0700, Jiaying Zhang wrote:
>>> On Tue, Aug 16, 2011 at 8:03 AM, Tao Ma <tm@....ma> wrote:
>>>> On 08/16/2011 09:53 PM, Jan Kara wrote:
>>> I wonder whether the following patch will solve the problem:
>>>
>>> diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
>>> index 6c27111..ca90d73 100644
>>> --- a/fs/ext4/indirect.c
>>> +++ b/fs/ext4/indirect.c
>>> @@ -800,12 +800,17 @@ ssize_t ext4_ind_direct_IO(int rw, struct kiocb *iocb,
>>> }
>>>
>>> retry:
>>> - if (rw == READ && ext4_should_dioread_nolock(inode))
>>> + if (rw == READ && ext4_should_dioread_nolock(inode)) {
>>> + if (unlikely(!list_empty(&ei->i_completed_io_list))) {
>>> + mutex_lock(&inode->i_mutex);
>>> + ext4_flush_completed_IO(inode);
>>> + mutex_unlock(&inode->i_mutex);
>>> + }
>>> ret = __blockdev_direct_IO(rw, iocb, inode,
>>> inode->i_sb->s_bdev, iov,
>>> offset, nr_segs,
>>> ext4_get_block, NULL, NULL, 0);
>>> - else {
>>> + } else {
>>> ret = blockdev_direct_IO(rw, iocb, inode,
>>> inode->i_sb->s_bdev, iov,
>>> offset, nr_segs,
>>>
>>> I tested the patch a little bit and it seems to resolve the race
>>> on dioread_nolock in my case. Michael, I would very much appreciate
>>> it if you could try this patch with your test case and see whether it works.
>>
>> Just my 2c worth here: this is a data corruption bug, so the root
>> cause needs to be fixed. The above patch does not address the root
>> cause.
>>
>>>> You are absolutely right. The real problem is that ext4_direct_IO
>>>> begins to work *after* we clear the page writeback flag and *before* we
>>>> convert the unwritten extent to a valid state. Some of my traces do show
>>>> that. I am working on it now.
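
To make the window concrete, the ordering I am seeing is roughly the
following (simplified from my traces; the exact call sites may be
slightly off):

/*
 * Rough timeline of the race window:
 *
 *   buffered write to an unwritten extent finishes its IO
 *     -> end_io clears PageWriteback on the pages
 *     -> the conversion work is queued on ei->i_completed_io_list
 *        for the workqueue, but has NOT run yet
 *
 *   direct read with dioread_nolock starts in this window
 *     -> no page is under writeback, so nothing makes it wait
 *     -> the extent tree still says "unwritten", so the read returns
 *        zeros although the data is already on disk
 *
 *   the workqueue later runs the conversion (ext4_flush_completed_IO
 *   would also do it) -> the extent becomes written, too late for the
 *   reader; fsync or truncate hitting the same window has the same
 *   problem unless it flushes the completed-IO list first
 */
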
>>
>> And that's the root cause - think about what that means for a
>> minute. It means that extent conversion can race with anything that
>> requires IO to complete first. e.g. truncate or fsync. It can then
>> race with other subsequent operations, which can have even nastier
>> effects. IOWs, there is a data-corruption landmine just sitting
>> there waiting for the next person to trip over it.
> You are right that extent conversion can race with truncate and fsync
> as well. That is why we already need to call ext4_flush_completed_IO()
> in those places too. I agree this is a little nasty and there can be
> some other corner cases that we haven't covered. The problem is that we
> cannot do the extent conversion at end_io time (the completion runs in
> a context where we cannot start a journal transaction, so the conversion
> has to be deferred to the workqueue). I haven't thought of a better
> approach to deal with these races. I am curious how xfs deals
> with this problem.
I agree with Dave that we may need to figure out a better way for this.
What's more, your patch has another side effect: with concurrent direct
reads and buffered writes, the direct reads are affected even when the
two do not overlap at all, because any pending unconverted IO forces the
reader to take i_mutex and flush it first. Do you have any test data on
the performance regression?
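
For reference, the kind of workload I have in mind is roughly the
following user-space sketch (not a real benchmark; the file name, sizes
and loop count are made up, a 4K filesystem block size is assumed, and
the file is assumed to already exist and be large enough):

/* Concurrent O_DIRECT reads from the first half of a file and buffered
 * writes to the second half -- the ranges never overlap, yet with the
 * patch every read may take i_mutex and flush pending conversions. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLK   4096L
#define HALF  (64L * 1024 * 1024)          /* 64MB per half, arbitrary */
#define LOOPS 4096L

static const char *path = "/mnt/ext4/testfile";   /* hypothetical mount */

static void *dio_reader(void *arg)
{
	void *buf;
	int fd = open(path, O_RDONLY | O_DIRECT);
	long i;

	if (fd < 0 || posix_memalign(&buf, BLK, BLK))
		return NULL;
	for (i = 0; i < LOOPS; i++)        /* reads stay in [0, HALF) */
		pread(fd, buf, BLK, (i * BLK) % HALF);
	free(buf);
	close(fd);
	return NULL;
}

static void *buffered_writer(void *arg)
{
	char buf[BLK];
	int fd = open(path, O_WRONLY);
	long i;

	if (fd < 0)
		return NULL;
	memset(buf, 0xab, sizeof(buf));
	for (i = 0; i < LOOPS; i++)        /* writes stay in [HALF, 2*HALF) */
		pwrite(fd, buf, BLK, HALF + (i * BLK) % HALF);
	close(fd);
	return NULL;
}

int main(void)
{
	pthread_t r, w;

	pthread_create(&r, NULL, dio_reader, NULL);
	pthread_create(&w, NULL, buffered_writer, NULL);
	pthread_join(r, NULL);
	pthread_join(w, NULL);
	return 0;
}

Compile with something like gcc -O2 -pthread and time the reader with
and without the patch applied.
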
Thanks,
Tao
>
> Jiaying
>
>>
>> Fix the root cause, don't put band-aids over the symptoms.
>>
>> Cheers,
>>
>> Dave.
>> --
>> Dave Chinner
>> david@...morbit.com
>>
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html