Message-ID: <1397907496-5993-7-git-send-email-laijs@cn.fujitsu.com>
Date: Sat, 19 Apr 2014 19:38:13 +0800
From: Lai Jiangshan <laijs@...fujitsu.com>
To: Tejun Heo <tj@...nel.org>, <linux-kernel@...r.kernel.org>
CC: Lai Jiangshan <laijs@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jean Delvare <jdelvare@...e.de>,
Monam Agarwal <monamagarwal123@...il.com>,
Jeff Layton <jlayton@...hat.com>,
Andreas Gruenbacher <agruen@...bit.com>,
Stephen Hemminger <stephen@...workplumber.org>
Subject: [PATCH 6/9 V2] idr: avoid ping-pong
The ida callers always call ida_pre_get() before ida_get_new*(), and
ida_pre_get() always preallocates a layer. This makes the speculative layer
removal in ida_get_new*() pointless: ida_get_new*() frees a layer, and the
ida_pre_get() before the next ida_get_new*() allocates one again.
The result is needless alloc/free ping-pong. The stated aim, "Throw away
extra resources one by one", is never achieved, and the speculative layer
removal in ida_get_new*() brings no optimization in practice.
So remove the unneeded layer removal from ida_get_new*().
Signed-off-by: Lai Jiangshan <laijs@...fujitsu.com>
---
lib/idr.c | 11 -----------
1 files changed, 0 insertions(+), 11 deletions(-)
diff --git a/lib/idr.c b/lib/idr.c
index 317fd35..25fe476 100644
--- a/lib/idr.c
+++ b/lib/idr.c
@@ -1001,17 +1001,6 @@ int ida_get_new_above(struct ida *ida, int starting_id, int *p_id)
*p_id = id;
- /* Each leaf node can handle nearly a thousand slots and the
- * whole idea of ida is to have small memory foot print.
- * Throw away extra resources one by one after each successful
- * allocation.
- */
- if (ida->idr.id_free_cnt || ida->free_bitmap) {
- struct idr_layer *p = get_from_free_list(&ida->idr);
- if (p)
- kmem_cache_free(idr_layer_cache, p);
- }
-
return 0;
}
EXPORT_SYMBOL(ida_get_new_above);
--
1.7.4.4