Message-ID: <BYAPR04MB5816C82F708612381216895BE7470@BYAPR04MB5816.namprd04.prod.outlook.com>
Date:   Thu, 28 Nov 2019 02:10:28 +0000
From:   Damien Le Moal <Damien.LeMoal@....com>
To:     Ritesh Harjani <riteshh@...ux.ibm.com>,
        "linux-f2fs-devel@...ts.sourceforge.net" 
        <linux-f2fs-devel@...ts.sourceforge.net>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Jaegeuk Kim <jaegeuk@...nel.org>, Chao Yu <yuchao0@...wei.com>
CC:     "linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
        Javier Gonzalez <javier@...igon.com>,
        Shinichiro Kawasaki <shinichiro.kawasaki@....com>
Subject: Re: [PATCH] f2fs: Fix direct IO handling

On 2019/11/26 17:34, Ritesh Harjani wrote:
> Hello Damien,
> 
> IIUC, you are trying to fix a stale data read by a DIO read for the
> case you explained in your patch, i.e. a DIO write that is forced to
> be executed as buffered IO.
> 
> Coincidentally, I was looking at the same code path just now.
> So I do have a query for you/the f2fs group. The below could be a
> silly one, as I don't understand F2FS in great detail.
> 
> How is a DIO read protected against reading stale data from mmap
> writes done via f2fs_vm_page_mkwrite?
> 
> f2fs_vm_page_mkwrite()		 f2fs_direct_IO (read)
> 					filemap_write_and_wait_range()
> 	-> f2fs_get_blocks()				
> 					 -> submit_bio()
> 
> 	-> set_page_dirty()
> 
> Is the above race possible with the current f2fs code?
> i.e., could f2fs_direct_IO read stale data from the blocks
> which were allocated due to an mmap fault?

The faulted page is locked until the fault is fully processed, so
direct IO has to wait for that to complete first.
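
To illustrate, here is a minimal sketch (not the actual f2fs code; the
function name and the simplified flow are mine) of the ordering I mean:
the fault handler takes the page lock before allocating blocks and keeps
it until the fault is fully processed, so a concurrent direct IO read,
which flushes and waits on the page cache first, serializes against the
fault:

#include <linux/mm.h>
#include <linux/pagemap.h>

/* Sketch only: a simplified page_mkwrite handler illustrating the
 * locking, not f2fs's actual implementation. */
static vm_fault_t sketch_page_mkwrite(struct vm_fault *vmf)
{
	struct page *page = vmf->page;

	/* The page lock is taken before any block allocation. */
	lock_page(page);

	/*
	 * ... allocate blocks while the page is locked (f2fs does this
	 * through f2fs_get_block()), so direct IO cannot flush or read
	 * the page halfway through the fault ...
	 */

	set_page_dirty(page);

	/*
	 * VM_FAULT_LOCKED tells the caller the page is still locked;
	 * it is only unlocked once the fault is fully processed.
	 */
	return VM_FAULT_LOCKED;
}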

> 
> Am I missing something here?
> 
> -ritesh
> 
> On 11/26/19 1:27 PM, Damien Le Moal wrote:
>> f2fs_preallocate_blocks() identifies direct IOs using the IOCB_DIRECT
>> flag for a kiocb structure. However, the file system direct IO handler
>> function f2fs_direct_IO() may have decided that a direct IO has to be
>> executed as a buffered IO using the function f2fs_force_buffered_io().
>> This is the case, for instance, for volumes including zoned block
>> devices and for unaligned write IOs with LFS mode enabled.
>>
>> These two different methods of identifying direct IOs can result in
>> inconsistent decisions, leading to stale data reads for direct reads
>> issued after a direct IO write that was treated as a buffered write.
>> Fix this inconsistency by combining the IOCB_DIRECT flag test with
>> the result of f2fs_force_buffered_io().
>>
>> Reported-by: Javier Gonzalez <javier@...igon.com>
>> Signed-off-by: Damien Le Moal <damien.lemoal@....com>
>> ---
>>   fs/f2fs/data.c | 4 +++-
>>   1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>> index 5755e897a5f0..8ac2d3b70022 100644
>> --- a/fs/f2fs/data.c
>> +++ b/fs/f2fs/data.c
>> @@ -1073,6 +1073,8 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
>>   	int flag;
>>   	int err = 0;
>>   	bool direct_io = iocb->ki_flags & IOCB_DIRECT;
>> +	bool do_direct_io = direct_io &&
>> +		!f2fs_force_buffered_io(inode, iocb, from);
>>   
>>   	/* convert inline data for Direct I/O*/
>>   	if (direct_io) {
>> @@ -1081,7 +1083,7 @@ int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *from)
>>   			return err;
>>   	}
>>   
>> -	if (direct_io && allow_outplace_dio(inode, iocb, from))
>> +	if (do_direct_io && allow_outplace_dio(inode, iocb, from))
>>   		return 0;
>>   
>>   	if (is_inode_flag_set(inode, FI_NO_PREALLOC))
>>
> 
> 


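For completeness, a minimal sketch (the helper name is mine, not f2fs
code) of the invariant the patch establishes: every path must reach the
same "really direct" decision as f2fs_direct_IO(), instead of testing
IOCB_DIRECT alone:

#include <linux/fs.h>
#include <linux/uio.h>

/* Sketch only: mirrors the do_direct_io computation in the patch;
 * f2fs_force_buffered_io() is the real f2fs helper, declared in
 * fs/f2fs/f2fs.h. */
static bool sketch_is_really_direct(struct inode *inode,
				    struct kiocb *iocb,
				    struct iov_iter *from)
{
	/*
	 * IOCB_DIRECT alone is not enough: f2fs_direct_IO() may still
	 * force the IO to buffered mode, e.g. for zoned block devices
	 * or unaligned writes in LFS mode.
	 */
	return (iocb->ki_flags & IOCB_DIRECT) &&
		!f2fs_force_buffered_io(inode, iocb, from);
}
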
-- 
Damien Le Moal
Western Digital Research
