Message-ID: <8c635c13-e0eb-5d29-d7c2-1edb9f75d6af@kernel.org>
Date:   Fri, 9 Sep 2016 00:09:28 +0800
From:   Chao Yu <chao@...nel.org>
To:     Jaegeuk Kim <jaegeuk@...nel.org>
Cc:     Chao Yu <yuchao0@...wei.com>, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org,
        linux-f2fs-devel@...ts.sourceforge.net
Subject: Re: [f2fs-dev] [PATCH] f2fs: merge WRITE bio into previous WRITE_SYNC

On 2016/9/8 8:26, Jaegeuk Kim wrote:
> On Wed, Sep 07, 2016 at 10:12:17PM +0800, Chao Yu wrote:
>> On 2016/9/3 2:36, Jaegeuk Kim wrote:
>>> On Fri, Sep 02, 2016 at 03:33:33PM +0800, Chao Yu wrote:
>>>> Hi Jaegeuk,
>>>>
>>>> On 2016/8/27 8:53, Jaegeuk Kim wrote:
>>>>> This can avoid bio splits due to different op_flags.
>>>>
>>>> I thought about this, but I don't think increasing the merging ratio of pages
>>>> in a bio this way is a good idea. It breaks the SYNC/ASYNC IO distinction
>>>> defined by the system, which indicates the degree of IO urgency; as a result,
>>>> some non-urgent IOs will be treated as urgent by the IO scheduler, which will
>>>> interrupt SYNC IOs in the block layer and, more seriously, may starve real
>>>> SYNC IO.
>>>
>>> I understand your concern.
>>> Originally, I tried to avoid breaking a big WRITE_SYNC by a small number of
>>
>> Hmm.. I'm worried about the opposite case: the user triggers small WRITE_SYNC
>> IOs periodically while there is a large number of WRITEs. With our new approach
>> we will obviously increase the number of synchronous WRITE IOs, because we will
>> mix ASYNC/SYNC WRITEs into the bio cache more intensively than before, since we
>> dropped the writepages mutex lock. So I'm afraid the result is that it will
>> mislead the block layer's scheduling.
>>
>>> WRITEs. And I thought a new WRITE could be piggybacked into the previous WRITE_SYNC.
>>>
>>> IMO, this happens very occasionally, since the previous pending bio should be
>>> WRITE_SYNC while the new request is WRITE. Even if this happens, the piggybacked
>>> size would not exceed the bio's max pages.
>>> If lots of WRITEs come, we won't change anything at all.
>>
>> I think this is related to the writeback / block layer / cgroup subsystems,
>> which use this tag frequently; maybe we should Cc their mailing lists for more
>> opinions...
> 
> Except cgroup, since we do not support it yet. :P

Yeap.

> 
> Anyway, I think we'd better verify the effect of this for a while.
> For example, I can write a simple program to measure fsync latency while a
> bunch of buffered writes are in flight.
> Meanwhile, I'll put it back at the end of the dev-test repo. :)

Sounds like a good plan. Hoping we will not suffer any regression here. ;)
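
For reference, here is a rough sketch of what such a latency measurement could
look like (just my rough idea, not your actual test program; file names, write
sizes, and iteration counts below are arbitrary assumptions):

/* Sketch: measure fsync() latency on one file while a background
 * thread keeps issuing buffered writes to another file.
 * Build with: gcc -O2 -pthread fsync_lat.c -o fsync_lat
 * Error handling is omitted for brevity. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static void *bg_writer(void *arg)
{
	int fd = open("bg_data", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	char buf[4096];

	memset(buf, 'a', sizeof(buf));
	for (;;)		/* keep generating plain WRITE traffic */
		if (write(fd, buf, sizeof(buf)) < 0)
			break;
	close(fd);
	return NULL;
}

int main(void)
{
	struct timespec t0, t1;
	pthread_t tid;
	char buf[4096];
	int fd, i;

	memset(buf, 'b', sizeof(buf));
	pthread_create(&tid, NULL, bg_writer, NULL);

	fd = open("sync_data", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	for (i = 0; i < 100; i++) {
		write(fd, buf, sizeof(buf));
		clock_gettime(CLOCK_MONOTONIC, &t0);
		fsync(fd);	/* exercises the WRITE_SYNC path */
		clock_gettime(CLOCK_MONOTONIC, &t1);
		printf("fsync %d: %ld us\n", i,
		       (t1.tv_sec - t0.tv_sec) * 1000000 +
		       (t1.tv_nsec - t0.tv_nsec) / 1000);
		sleep(1);
	}
	close(fd);
	return 0;
}

Comparing the fsync latency distribution before and after the patch should show
whether mixing WRITE into a pending WRITE_SYNC bio hurts the sync path.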

Thanks,

> 
> Thanks,
> 
>>
>> What's your opinion? :)
>>
>> thanks,
>>
>>>
>>> Thanks,
>>>
>>>>
>>>> Thanks,
>>>>
>>>>>
>>>>> Signed-off-by: Jaegeuk Kim <jaegeuk@...nel.org>
>>>>> ---
>>>>>  fs/f2fs/data.c | 5 +++++
>>>>>  1 file changed, 5 insertions(+)
>>>>>
>>>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>>>> index 7c8e219..c7c2022 100644
>>>>> --- a/fs/f2fs/data.c
>>>>> +++ b/fs/f2fs/data.c
>>>>> @@ -267,6 +267,11 @@ void f2fs_submit_page_mbio(struct f2fs_io_info *fio)
>>>>>  
>>>>>  	down_write(&io->io_rwsem);
>>>>>  
>>>>> +	/* WRITE can be merged into previous WRITE_SYNC */
>>>>> +	if (io->bio && io->last_block_in_bio == fio->new_blkaddr - 1 &&
>>>>> +			io->fio.op == fio->op && io->fio.op_flags == WRITE_SYNC)
>>>>> +		fio->op_flags = WRITE_SYNC;
>>>>> +
>>>>>  	if (io->bio && (io->last_block_in_bio != fio->new_blkaddr - 1 ||
>>>>>  	    (io->fio.op != fio->op || io->fio.op_flags != fio->op_flags)))
>>>>>  		__submit_merged_bio(io);
>>>>>
>>>
