Message-ID: <20160823165353.GA73835@jaegeuk>
Date: Tue, 23 Aug 2016 09:53:53 -0700
From: Jaegeuk Kim <jaegeuk@...nel.org>
To: Chao Yu <chao@...nel.org>
Cc: linux-f2fs-devel@...ts.sourceforge.net,
linux-kernel@...r.kernel.org, Chao Yu <yuchao0@...wei.com>
Subject: Re: [PATCH 2/3] f2fs: schedule in between two continuous batch
 discards
Hi Chao,
On Sun, Aug 21, 2016 at 11:21:30PM +0800, Chao Yu wrote:
> From: Chao Yu <yuchao0@...wei.com>
>
> The batch discard approach of fstrim grabs/releases the gc_mutex lock
> repeatedly, which makes contention on the lock more intense.
>
> So after one batch of discards has been issued in checkpoint and the lock
> has been released, it's better to call schedule() to give other competitors
> a better chance of grabbing the gc_mutex lock.
>
> Signed-off-by: Chao Yu <yuchao0@...wei.com>
> ---
> fs/f2fs/segment.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
> index 020767c..d0f74eb 100644
> --- a/fs/f2fs/segment.c
> +++ b/fs/f2fs/segment.c
> @@ -1305,6 +1305,8 @@ int f2fs_trim_fs(struct f2fs_sb_info *sbi, struct fstrim_range *range)
> mutex_unlock(&sbi->gc_mutex);
> if (err)
> break;
> +
> + schedule();
Hmm, if another thread is already waiting for gc_mutex, we don't need this here.
In order to avoid long latency, wouldn't it be enough to reduce the batch size?
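
Just to illustrate the trade-off, here is a made-up user-space analogue (not
the f2fs code; the names and timings are invented): the trim thread re-takes
the mutex for every batch, yielding after the unlock only helps once a
competitor is actually runnable, while shrinking the batch is what bounds how
long the lock is held on each pass.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

#define BATCHES    8
#define BATCH_USEC 1000	/* stand-in for the batch size: lock hold time per pass */

static pthread_mutex_t gc_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Analogue of the fstrim loop: take the mutex once per batch, then yield. */
static void *trim_thread(void *arg)
{
	(void)arg;
	for (int i = 0; i < BATCHES; i++) {
		pthread_mutex_lock(&gc_mutex);
		usleep(BATCH_USEC);		/* "one batch of discards" */
		pthread_mutex_unlock(&gc_mutex);
		sched_yield();			/* analogue of the added schedule() */
	}
	return NULL;
}

/* Analogue of a competitor (e.g. background GC) blocking on the same mutex. */
static void *gc_thread(void *arg)
{
	(void)arg;
	for (int i = 0; i < BATCHES; i++) {
		pthread_mutex_lock(&gc_mutex);
		printf("gc got the mutex (pass %d)\n", i);
		pthread_mutex_unlock(&gc_mutex);
	}
	return NULL;
}

int main(void)
{
	pthread_t trim, gc;

	pthread_create(&trim, NULL, trim_thread, NULL);
	pthread_create(&gc, NULL, gc_thread, NULL);
	pthread_join(trim, NULL);
	pthread_join(gc, NULL);
	return 0;
}

In that sketch, lowering BATCH_USEC (i.e. the batch size) is what bounds the
gc thread's worst-case wait, with or without the extra yield.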
Thanks,
> }
> out:
> range->len = F2FS_BLK_TO_BYTES(cpc.trimmed);
> --
> 2.7.2