Message-ID: <CAA7+ByWuSVzKsgCwyrrac=faWuGY9FxHpeSiJ0-WtP7DVg4y1g@mail.gmail.com>
Date: Tue, 5 Jun 2012 14:34:25 +0800
From: Hong zhi guo <honkiko@...il.com>
To: linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
zhiguo.hong@....com
Subject: [PATCH] vmalloc: walk vmap_areas by sorted list instead of rb_next()
There's a walk that repeats rb_next() to find a suitable hole. It can
simply be replaced by a walk over the sorted vmap_area_list, which is
both simpler and more efficient.
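
To illustrate the idea, here is a minimal userspace sketch (not the
kernel code; struct range and find_hole() are invented names for this
example): once the occupied areas are kept on a list sorted by address,
finding the first hole large enough is a plain forward walk, with no
tree traversal at all.

#include <stdio.h>

struct range {
	unsigned long start, end;	/* occupied [start, end)  */
	struct range *next;		/* kept sorted by start   */
};

/* Return the lowest address >= vstart where a hole of "size" fits. */
static unsigned long find_hole(struct range *head, unsigned long vstart,
			       unsigned long size)
{
	unsigned long addr = vstart;
	struct range *r;

	for (r = head; r; r = r->next) {
		if (addr + size <= r->start)
			return addr;		/* hole before this range   */
		if (r->end > addr)
			addr = r->end;		/* skip past occupied space */
	}
	return addr;				/* hole after the last range */
}

int main(void)
{
	struct range c = { 0x9000, 0xa000, NULL };
	struct range b = { 0x5000, 0x7000, &c };
	struct range a = { 0x1000, 0x3000, &b };

	/* First 0x2000-byte hole at or above 0x1000 is [0x3000, 0x5000). */
	printf("hole at 0x%lx\n", find_hole(&a, 0x1000, 0x2000));
	return 0;
}

Built with gcc, this prints "hole at 0x3000", the gap between the first
two ranges.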
Mutation of the list and the tree only happens in pairs, within
__insert_vmap_area() and __free_vmap_area(), under the protection of
vmap_area_lock. The patched code also runs under vmap_area_lock, so the
list walk is safe and consistent with the tree walk.
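
As a hedged illustration of that invariant (plain pthreads, invented
names such as area_lock, area_list, area_count and insert_area; nothing
below is the actual kernel code): two views of the same set are only
ever updated inside one critical section, so a walker that takes the
same lock sees them consistent, whichever view it chooses to traverse.

#include <pthread.h>

struct area {
	unsigned long start;
	struct area *next;		/* sorted by start */
};

static pthread_mutex_t area_lock = PTHREAD_MUTEX_INITIALIZER;
static struct area *area_list;		/* stands in for vmap_area_list */
static unsigned long area_count;	/* stands in for the rb-tree view */

static void insert_area(struct area *new)
{
	struct area **p;

	pthread_mutex_lock(&area_lock);
	for (p = &area_list; *p && (*p)->start < new->start; p = &(*p)->next)
		;
	new->next = *p;			/* both views change together, */
	*p = new;			/* in one critical section     */
	area_count++;
	pthread_mutex_unlock(&area_lock);
}

static unsigned long lowest_area(void)
{
	unsigned long start;

	pthread_mutex_lock(&area_lock);	/* walker takes the same lock */
	start = area_list ? area_list->start : 0;
	pthread_mutex_unlock(&area_lock);
	return start;
}

int main(void)
{
	struct area a = { 0x2000, 0 }, b = { 0x1000, 0 };

	insert_area(&a);
	insert_area(&b);
	return lowest_area() == 0x1000 ? 0 : 1;
}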
Tested on SMP by repeating batches of vmalloc and vfree with random
sizes and rounds for hours.
Signed-off-by: Hong Zhiguo <honkiko@...il.com>
---
mm/vmalloc.c | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 2aad499..0eb5347 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -413,11 +413,11 @@ nocache:
 		if (addr + size - 1 < addr)
 			goto overflow;
 
-		n = rb_next(&first->rb_node);
-		if (n)
-			first = rb_entry(n, struct vmap_area, rb_node);
-		else
+		if (list_is_last(&first->list, &vmap_area_list))
 			goto found;
+
+		first = list_entry(first->list.next,
+				struct vmap_area, list);
 	}
 
 found:
--
1.7.4.1