Message-ID: <ad311ee5-fe8f-9fbc-1e7a-7bfc379d268c@huawei.com>
Date: Sat, 30 Nov 2019 15:27:29 +0800
From: Chao Yu <yuchao0@...wei.com>
To: Ritesh Harjani <riteshh@...ux.ibm.com>,
Damien Le Moal <Damien.LeMoal@....com>,
"linux-f2fs-devel@...ts.sourceforge.net"
<linux-f2fs-devel@...ts.sourceforge.net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Jaegeuk Kim <jaegeuk@...nel.org>
CC: Javier Gonzalez <javier@...igon.com>,
Shinichiro Kawasaki <shinichiro.kawasaki@....com>
Subject: Re: [PATCH] f2fs: Fix direct IO handling
On 2019/11/28 18:20, Ritesh Harjani wrote:
>
>
> On 11/28/19 7:40 AM, Damien Le Moal wrote:
>> On 2019/11/26 17:34, Ritesh Harjani wrote:
>>> Hello Damien,
>>>
>>> IIUC, you are trying to fix a stale data read by a DIO read for the
>>> case you explained in your patch, where a DIO write is forced to go
>>> through buffered IO.
>>>
>>> Coincidentally I was just looking at the same code path just now.
>>> So I do have a query for you/the f2fs group. It could be a silly one,
>>> as I don't understand F2FS in great detail.
>>>
>>> How is a DIO read protected against reading stale data when there
>>> are mmap writes via f2fs_vm_page_mkwrite()?
>>>
>>> f2fs_vm_page_mkwrite()                  f2fs_direct_IO (read)
>>>                                           filemap_write_and_wait_range()
>>>   -> f2fs_get_blocks()
>>>                                           -> submit_bio()
>>>
>>>   -> set_page_dirty()
>>>
>>> Is the above race possible with the current f2fs code?
>>> i.e. could f2fs_direct_IO read stale data from the blocks
>>> which were allocated due to the mmap fault?
>>
>> The faulted page is locked until the fault is fully processed so direct
>> IO has to wait for that to complete first.
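For reference, a simplified paraphrase of the ordering inside
f2fs_vm_page_mkwrite() around v5.4 (fs/f2fs/file.c). Most checks and the
error handling are dropped, so treat it only as a sketch of where the page
lock and i_mmap_sem sit relative to the block allocation, not as the exact
code:

static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf)
{
	struct page *page = vmf->page;
	struct inode *inode = file_inode(vmf->vma->vm_file);
	struct dnode_of_data dn = { .node_changed = false };
	int err;

	sb_start_pagefault(inode->i_sb);

	down_read(&F2FS_I(inode)->i_mmap_sem);
	lock_page(page);		/* faulted page is locked here ... */

	/* block allocation for the faulted page */
	set_new_dnode(&dn, inode, NULL, NULL, 0);
	err = f2fs_get_block(&dn, page->index);
	f2fs_put_dnode(&dn);

	set_page_dirty(page);		/* ... and only becomes dirty here */

	up_read(&F2FS_I(inode)->i_mmap_sem);
	sb_end_pagefault(inode->i_sb);

	/* VM_FAULT_LOCKED on success: the page is returned still locked */
	return block_page_mkwrite_return(err);
}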
>
> How about the below parallelism?
>
> f2fs_vm_page_mkwrite()                  f2fs_direct_IO (read)
>                                           filemap_write_and_wait_range()
>   -> down_read(->i_mmap_sem);
>   -> lock_page()
>   -> f2fs_get_blocks()
>                                           -> submit_bio()
>
>   -> set_page_dirty()
>
> Can the above DIO read not expose stale data from the block which was
> allocated in the f2fs_vm_page_mkwrite() path?
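For comparison, a simplified paraphrase of the direct read side that f2fs
falls through to (the IOCB_DIRECT branch of generic_file_read_iter() in
mm/filemap.c, ~v5.4); this is a sketch only, but note that nothing on this
side takes the page lock between the flush and the bio submission, which is
the window the diagram above relies on:

ssize_t generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
{
	struct address_space *mapping = iocb->ki_filp->f_mapping;
	size_t count = iov_iter_count(iter);
	ssize_t retval;

	/* only pages that are dirty *now* get flushed; a page dirtied by a
	 * concurrent mkwrite after this point is not seen by the read */
	retval = filemap_write_and_wait_range(mapping, iocb->ki_pos,
					      iocb->ki_pos + count - 1);
	if (retval < 0)
		return retval;

	/* f2fs_direct_IO() -> __blockdev_direct_IO(): maps blocks through
	 * its get_block callback and submits read bios without lock_page() */
	return mapping->a_ops->direct_IO(iocb, iter);
}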
The race can happen; however, I suspect the race condition is more
complicated, as I described in my previous reply. Could you check that?
Thanks,
>
>
>>
>>>
>>> Am I missing something here?
>>>
>>> -ritesh
>>>
>>> On 11/26/19 1:27 PM, Damien Le Moal wrote:
>>>> f2fs_preallocate_blocks() identifies direct IOs using the IOCB_DIRECT
>>>> flag for a kiocb structure. However, the file system direct IO handler
>>>> function f2fs_direct_IO() may have decided that a direct IO has to be
>>>> executed as a buffered IO using the function f2fs_force_buffered_io().
>>>> This is the case, for instance, for volumes including zoned block devices
>>>> and for unaligned write IOs with LFS mode enabled.
>>>>
>>>> These 2 different methods of identifying direct IOs can result in
>>>> inconsistencies generating stale data access for direct reads after a
>>>> direct IO write that is treated as a buffered write. Fix this
>>>> inconsistency by combining the IOCB_DIRECT flag test with the result
>>>> of f2fs_force_buffered_io().
>>>>
>>>> Reported-by: Javier Gonzalez <javier@...igon.com>
>>>> Signed-off-by: Damien Le Moal <damien.lemoal@....com>
>>>> ---
>>>>  fs/f2fs/data.c | 4 +++-
>>>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>>> index 5755e897a5f0..8ac2d3b70022 100644
>>>> --- a/fs/f2fs/data.c
>>>> +++ b/fs/f2fs/data.c
>>>> @@ -1073,6 +1073,8 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
>>>>  	int flag;
>>>>  	int err = 0;
>>>>  	bool direct_io = iocb->ki_flags & IOCB_DIRECT;
>>>> +	bool do_direct_io = direct_io &&
>>>> +			!f2fs_force_buffered_io(inode, iocb, from);
>>>>  
>>>>  	/* convert inline data for Direct I/O*/
>>>>  	if (direct_io) {
>>>> @@ -1081,7 +1083,7 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
>>>>  		return err;
>>>>  	}
>>>>  
>>>> -	if (direct_io && allow_outplace_dio(inode, iocb, from))
>>>> +	if (do_direct_io && allow_outplace_dio(inode, iocb, from))
>>>>  		return 0;
>>>>  
>>>>  	if (is_inode_flag_set(inode, FI_NO_PREALLOC))
>>>>
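As a side note on the commit message above, a hedged paraphrase of
f2fs_force_buffered_io() (a static inline in fs/f2fs/f2fs.h in this era)
showing only the two cases the commit message names; the real helper has
additional checks (post-read processing, multi-device, IO alignment, etc.)
and the exact conditions vary by kernel version:

static inline bool f2fs_force_buffered_io(struct inode *inode,
				struct kiocb *iocb, struct iov_iter *iter)
{
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
	int rw = iov_iter_rw(iter);

	/* zoned block devices: writes must stay sequential, so direct IO
	 * is funneled through the buffered/LFS write path */
	if (f2fs_sb_has_blkzoned(sbi))
		return true;

	/* LFS mode: block-unaligned direct writes cannot be done in place */
	if (test_opt(sbi, LFS) && rw == WRITE &&
	    block_unaligned_IO(inode, iocb, iter))
		return true;

	return false;
}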
>>>
>>>
>>
>>
>
> .
>