Message-ID: <20150704070353.GE15817@jaegeuk-mac02.hsd1.ca.comcast.net>
Date: Sat, 4 Jul 2015 00:03:53 -0700
From: Jaegeuk Kim <jaegeuk@...nel.org>
To: Chao Yu <yuchaochina@...mail.com>
Cc: linux-kernel@...r.kernel.org,
linux-f2fs-devel@...ts.sourceforge.net
Subject: Re: [PATCH] f2fs: reduce lock overhead of extent node releasing
On Thu, Jul 02, 2015 at 08:40:12PM +0800, Chao Yu wrote:
> From e5c6600d01c4462c4e1ee0c70ec1d9319862077d Mon Sep 17 00:00:00 2001
> From: Chao Yu <chao2.yu@...sung.com>
> Date: Thu, 2 Jul 2015 18:52:46 +0800
> Subject: [PATCH] f2fs: reduce lock overhead of extent node releasing
>
> Opening and closing the critical section for each extent node while
> traversing the rb-tree results in high CPU overhead and slows things down.
>
> This patch switches to a batch mode for removing extent nodes under the
> spin lock.
>
> Signed-off-by: Chao Yu <chao2.yu@...sung.com>
> ---
> fs/f2fs/data.c | 28 ++++++++++++++++++++--------
> 1 file changed, 20 insertions(+), 8 deletions(-)
>
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 6a706dd..7fb56a0 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -441,19 +441,31 @@ static unsigned int __free_extent_tree(struct f2fs_sb_info *sbi,
> struct extent_node *en;
> unsigned int count = et->count;
>
> - node = rb_first(&et->root);
> - while (node) {
> - next = rb_next(node);
> - en = rb_entry(node, struct extent_node, rb_node);
> + if (!et->count)
> + return 0;
> +
> + /* 1. remove all extent nodes in global lru list */
> + if (free_all) {
> + spin_lock(&sbi->extent_lock);
> + node = rb_first(&et->root);
> + while (node) {
> + next = rb_next(node);
> + en = rb_entry(node, struct extent_node, rb_node);
>
> - if (free_all) {
> - spin_lock(&sbi->extent_lock);
> if (!list_empty(&en->list))
> list_del_init(&en->list);
> - spin_unlock(&sbi->extent_lock);
> + node = next;
> }
> + spin_unlock(&sbi->extent_lock);
> + }
> +
> + /* 2. release all extent nodes which are not in global lru list */
Hmm,
Is there any overhead from traversing the rb-tree twice, or any spin_lock
delay caused by contention?

Thanks,
> + node = rb_first(&et->root);
> + while (node) {
> + next = rb_next(node);
> + en = rb_entry(node, struct extent_node, rb_node);
>
> - if (free_all || list_empty(&en->list)) {
> + if (list_empty(&en->list)) {
> __detach_extent_node(sbi, et, en);
> kmem_cache_free(extent_node_slab, en);
> }
> --
> 2.4.2
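
For reference, the locking pattern under discussion looks roughly like the
sketch below. This is a minimal userspace analogue, assuming a pthread
spinlock and a hand-rolled list in place of the kernel's spinlock_t and
list_head; the names (lru_lock, unlink_node, remove_per_node, remove_batched)
are illustrative and not from f2fs.

#include <pthread.h>

struct node {
	struct node *prev, *next;	/* links in a global "lru" list */
	int on_list;
};

static pthread_spinlock_t lru_lock;

/* Unlink one node from the list; caller must hold lru_lock. */
static void unlink_node(struct node *en)
{
	if (en->on_list) {
		en->prev->next = en->next;
		en->next->prev = en->prev;
		en->on_list = 0;
	}
}

/* Before: one lock/unlock round trip per node. */
static void remove_per_node(struct node **nodes, int n)
{
	for (int i = 0; i < n; i++) {
		pthread_spin_lock(&lru_lock);
		unlink_node(nodes[i]);
		pthread_spin_unlock(&lru_lock);
	}
}

/* After: one critical section covering the whole batch. */
static void remove_batched(struct node **nodes, int n)
{
	pthread_spin_lock(&lru_lock);
	for (int i = 0; i < n; i++)
		unlink_node(nodes[i]);
	pthread_spin_unlock(&lru_lock);
}

int main(void)
{
	pthread_spin_init(&lru_lock, PTHREAD_PROCESS_PRIVATE);
	/* ... build a list and exercise either path ... */
	pthread_spin_destroy(&lru_lock);
	return 0;
}

The batched form cuts the per-node lock/unlock cost, but it also lengthens the
single hold time on lru_lock, which is exactly the contention question raised
above.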