Message-ID: <fd5b56ef-4f27-65b7-d3a2-71ef4425b452@huawei.com>
Date:   Fri, 13 Apr 2018 09:27:53 +0800
From:   Chao Yu <yuchao0@...wei.com>
To:     Jaegeuk Kim <jaegeuk@...nel.org>
CC:     <linux-f2fs-devel@...ts.sourceforge.net>,
        <linux-kernel@...r.kernel.org>, <chao@...nel.org>
Subject: Re: [PATCH] f2fs: enlarge block plug coverage

On 2018/4/13 9:06, Jaegeuk Kim wrote:
> On 04/10, Chao Yu wrote:
>> On 2018/4/10 12:10, Jaegeuk Kim wrote:
>>> On 04/10, Chao Yu wrote:
>>>> On 2018/4/10 2:02, Jaegeuk Kim wrote:
>>>>> On 04/08, Chao Yu wrote:
>>>>>> On 2018/4/5 11:51, Jaegeuk Kim wrote:
>>>>>>> On 04/04, Chao Yu wrote:
>>>>>>>> This patch enlarges the block plug coverage in __issue_discard_cmd(), in
>>>>>>>> order to collect more pending bios before issuing them, so that we avoid
>>>>>>>> being disturbed by previously issued discard I/O in IO-aware discard mode.
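
(For clarity, the plugging pattern being discussed looks roughly like the
sketch below; it is illustrative only, not the exact f2fs code, and the
actual submit call is reduced to a placeholder comment:)

	struct blk_plug plug;

	blk_start_plug(&plug);
	list_for_each_entry_safe(dc, tmp, pend_list, list) {
		/* submit one pending discard here; its bio stays queued in the
		 * per-task plug list instead of going to the device right away */
		if (++iter >= dpolicy->max_requests)
			break;
	}
	/* flush all the batched discard bios to the block layer in one go */
	blk_finish_plug(&plug);
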
>>>>>>>
>>>>>>> Hmm, then we need to wait for huge discard IO for over 10 secs, which
>>>>>>
>>>>>> We found that the total discard latency depends on the total number of discards we
>>>>>> issued last time, rather than on the range or length the discards cover. IMO, if we
>>>>>> don't change the .max_requests value, we will not suffer longer latency.
>>>>>>
>>>>>>> will affect the following read/write IOs accordingly. In order to avoid that,
>>>>>>> we actually need to limit the discard size.
>>>>
>>>> Do you mean limiting the discard count or the discard length?
>>>
>>> Both of them.
>>>
>>>>
>>>>>>
>>>>>> If you are worried about I/O interference between discard and rw, I suggest
>>>>>> decreasing the .max_requests value.
>>>>>
>>>>> What do you mean? This will produce more pending requests in the queue?
>>>>
>>>> I mean that after applying this patch, we can queue more discard IOs in the plug inside
>>>> the task; otherwise, previously issued discards in the block layer can make is_idle() return
>>>> false, which stops the IO-aware issuer from submitting the pending discard commands.
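
(To illustrate what I mean by the IO-aware check: a pending discard is only
issued while the request queue looks idle, so discards that are already
sitting in the block layer keep it "busy". A simplified sketch of such an
idle check is below; it is not the exact f2fs helper:)

	/* simplified sketch of an IO-aware idle check, not the exact f2fs code:
	 * the device is treated as idle only when no sync/async requests are
	 * queued, so discards already submitted to the block layer make this
	 * return false and block further pending discards from being issued */
	static inline bool is_idle(struct f2fs_sb_info *sbi)
	{
		struct request_queue *q = bdev_get_queue(sbi->sb->s_bdev);
		struct request_list *rl = &q->root_rl;

		return !(rl->count[BLK_RW_SYNC] + rl->count[BLK_RW_ASYNC]);
	}
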
>>>
>>> Then, the unplug will issue lots of discard commands, which affects the following
>>> rw latencies. My preference would be to issue discard commands one by one as much
>>> as possible.
>>
>> Hmm.. regarding your concern, how about turning down the IO priority of background discards?
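
(Something along these lines, as a rough sketch only; the helper name is made
up and the exact hook point would need discussion. This also only helps when
the scheduler honors I/O priorities, e.g. CFQ:)

	#include <linux/bio.h>
	#include <linux/ioprio.h>

	/* rough sketch, not actual f2fs code: mark background discard bios with
	 * the idle I/O priority class so the scheduler serves foreground
	 * read/write requests first */
	static void f2fs_set_discard_bio_prio(struct bio *bio)
	{
		bio_set_prio(bio, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0));
	}
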
> 
> That makes much more sense to me. :P

Then this patch, which enlarges the plug coverage, will no longer be a problem, right? ;)

Thanks,

> 
>>
>> Thanks,
>>
>>>
>>>>
>>>> Thanks,
>>>>
>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>>>
>>>>>>> Thanks,
>>>>>>>
>>>>>>>>
>>>>>>>> Signed-off-by: Chao Yu <yuchao0@...wei.com>
>>>>>>>> ---
>>>>>>>>  fs/f2fs/segment.c | 7 +++++--
>>>>>>>>  1 file changed, 5 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
>>>>>>>> index 8f0b5ba46315..4287e208c040 100644
>>>>>>>> --- a/fs/f2fs/segment.c
>>>>>>>> +++ b/fs/f2fs/segment.c
>>>>>>>> @@ -1208,10 +1208,12 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>>>>>>>>  		pend_list = &dcc->pend_list[i];
>>>>>>>>  
>>>>>>>>  		mutex_lock(&dcc->cmd_lock);
>>>>>>>> +
>>>>>>>> +		blk_start_plug(&plug);
>>>>>>>> +
>>>>>>>>  		if (list_empty(pend_list))
>>>>>>>>  			goto next;
>>>>>>>>  		f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
>>>>>>>> -		blk_start_plug(&plug);
>>>>>>>>  		list_for_each_entry_safe(dc, tmp, pend_list, list) {
>>>>>>>>  			f2fs_bug_on(sbi, dc->state != D_PREP);
>>>>>>>>  
>>>>>>>> @@ -1227,8 +1229,9 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
>>>>>>>>  			if (++iter >= dpolicy->max_requests)
>>>>>>>>  				break;
>>>>>>>>  		}
>>>>>>>> -		blk_finish_plug(&plug);
>>>>>>>>  next:
>>>>>>>> +		blk_finish_plug(&plug);
>>>>>>>> +
>>>>>>>>  		mutex_unlock(&dcc->cmd_lock);
>>>>>>>>  
>>>>>>>>  		if (iter >= dpolicy->max_requests)
>>>>>>>> -- 
>>>>>>>> 2.15.0.55.gc2ece9dc4de6
