Message-ID: <20180410041033.GE47598@jaegeuk-macbookpro.roam.corp.google.com>
Date: Mon, 9 Apr 2018 21:10:33 -0700
From: Jaegeuk Kim <jaegeuk@...nel.org>
To: Chao Yu <yuchao0@...wei.com>
Cc: linux-f2fs-devel@...ts.sourceforge.net, linux-kernel@...r.kernel.org, chao@...nel.org
Subject: Re: [PATCH] f2fs: enlarge block plug coverage

On 04/10, Chao Yu wrote:
> On 2018/4/10 2:02, Jaegeuk Kim wrote:
> > On 04/08, Chao Yu wrote:
> >> On 2018/4/5 11:51, Jaegeuk Kim wrote:
> >>> On 04/04, Chao Yu wrote:
> >>>> This patch enlarges block plug coverage in __issue_discard_cmd, in
> >>>> order to collect more pending bios before issuing them, to avoid
> >>>> being disturbed by previous discard I/O in IO-aware discard mode.
> >>>
> >>> Hmm, then we need to wait for huge discard IO for over 10 secs, which
> >>
> >> We found that total discard latency depends on the number of discards we
> >> issued last time, not on the range or length the discards covered. IMO, if
> >> we don't change the .max_requests value, we will not suffer longer latency.
> >>
> >>> will affect the following read/write IOs accordingly. In order to avoid
> >>> that, we actually need to limit the discard size.
>
> Do you mean limiting the discard count or the discard length?

Both of them.

>
> >>
> >> If you are worried about I/O interference between discard and rw, I suggest
> >> decreasing the .max_requests value.
> >
> > What do you mean? This will produce more pending requests in the queue?
>
> I mean that after applying this patch, we can queue more discard IOs in the
> plug inside the task; otherwise, previously issued discards in the block layer
> can make is_idle() return false, which stops the IO-aware path from issuing
> pending discard commands.

Then the unplug will issue lots of discard commands, which affects the
following rw latencies. My preference would be to issue discard commands one
by one as much as possible.

>
> Thanks,
>
> >
> >>
> >> Thanks,
> >>
> >>>
> >>> Thanks,
> >>>
> >>>>
> >>>> Signed-off-by: Chao Yu <yuchao0@...wei.com>
> >>>> ---
> >>>>  fs/f2fs/segment.c | 7 +++++--
> >>>>  1 file changed, 5 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> >>>> index 8f0b5ba46315..4287e208c040 100644
> >>>> --- a/fs/f2fs/segment.c
> >>>> +++ b/fs/f2fs/segment.c
> >>>> @@ -1208,10 +1208,12 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
> >>>>  		pend_list = &dcc->pend_list[i];
> >>>>
> >>>>  		mutex_lock(&dcc->cmd_lock);
> >>>> +
> >>>> +		blk_start_plug(&plug);
> >>>> +
> >>>>  		if (list_empty(pend_list))
> >>>>  			goto next;
> >>>>  		f2fs_bug_on(sbi, !__check_rb_tree_consistence(sbi, &dcc->root));
> >>>> -		blk_start_plug(&plug);
> >>>>  		list_for_each_entry_safe(dc, tmp, pend_list, list) {
> >>>>  			f2fs_bug_on(sbi, dc->state != D_PREP);
> >>>>
> >>>> @@ -1227,8 +1229,9 @@ static int __issue_discard_cmd(struct f2fs_sb_info *sbi,
> >>>>  			if (++iter >= dpolicy->max_requests)
> >>>>  				break;
> >>>>  		}
> >>>> -		blk_finish_plug(&plug);
> >>>>  next:
> >>>> +		blk_finish_plug(&plug);
> >>>> +
> >>>>  		mutex_unlock(&dcc->cmd_lock);
> >>>>
> >>>>  		if (iter >= dpolicy->max_requests)
> >>>> --
> >>>> 2.15.0.55.gc2ece9dc4de6
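[Editorial note] For context on why the plug placement matters here, below is a
minimal, hypothetical sketch of the per-task block plugging idiom the patch
relies on. It is not the f2fs code: issue_pending_discards(), struct
pending_discard, and the pre-built bios are invented stand-ins for f2fs's
discard_cmd pending list and __submit_discard_cmd(). While the plug is held,
bios submitted by the task are batched per task; blk_finish_plug() then
releases the whole batch to the request queue, which is the burst of discards
the reply above is concerned about, bounded by max_requests.

/*
 * Illustrative sketch only -- not f2fs code. "struct pending_discard" and
 * "issue_pending_discards()" are hypothetical stand-ins for f2fs's
 * discard_cmd list and __submit_discard_cmd().
 */
#include <linux/blkdev.h>
#include <linux/list.h>

struct pending_discard {
	struct list_head list;
	struct bio *bio;	/* pre-built REQ_OP_DISCARD bio */
};

static unsigned int issue_pending_discards(struct list_head *pend_list,
					   unsigned int max_requests)
{
	struct pending_discard *pd, *tmp;
	struct blk_plug plug;
	unsigned int iter = 0;

	/*
	 * While the plug is held, bios submitted by this task are collected
	 * on a per-task list instead of going straight to the device queue.
	 */
	blk_start_plug(&plug);

	list_for_each_entry_safe(pd, tmp, pend_list, list) {
		submit_bio(pd->bio);
		list_del(&pd->list);	/* freeing of pd elided in this sketch */

		/*
		 * max_requests bounds the batch size, and therefore the burst
		 * of discards released at unplug time.
		 */
		if (++iter >= max_requests)
			break;
	}

	/*
	 * Unplug: the batched discard bios are flushed to the request queue
	 * together. This is the burst discussed above; issuing discards one
	 * by one would instead interleave them with other I/O.
	 */
	blk_finish_plug(&plug);

	return iter;
}

This is the tension in the thread: widening the plug scope batches more
discards per unplug (Chao's goal, so previously issued discards do not trip
the is_idle() check), while Jaegeuk prefers issuing them one by one to keep
the following read/write latencies low.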