Message-Id: <1410468841-320-3-git-send-email-ddstreet@ieee.org>
Date: Thu, 11 Sep 2014 16:53:53 -0400
From: Dan Streetman <ddstreet@...e.org>
To: Minchan Kim <minchan@...nel.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Nitin Gupta <ngupta@...are.org>,
Seth Jennings <sjennings@...iantweb.net>,
Andrew Morton <akpm@...ux-foundation.org>,
Dan Streetman <ddstreet@...e.org>
Subject: [PATCH 02/10] zsmalloc: add fullness group list for ZS_FULL zspages
Move ZS_FULL into the section of fullness_group entries that are tracked in
the class fullness lists. Without this change, full zspages are untracked
by zsmalloc; they are only moved back onto one of the tracked lists
(ZS_ALMOST_FULL or ZS_ALMOST_EMPTY) when a zsmalloc user frees one or more
of their contained objects.
This is required for zsmalloc shrinking, which needs to be able to search
all zspages in a zsmalloc pool to find one to shrink.
Signed-off-by: Dan Streetman <ddstreet@...e.org>
Cc: Minchan Kim <minchan@...nel.org>
---
mm/zsmalloc.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 03aa72f..fedb70f 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -159,16 +159,19 @@
ZS_SIZE_CLASS_DELTA + 1)
/*
- * We do not maintain any list for completely empty or full pages
+ * We do not maintain any list for completely empty zspages,
+ * since a zspage is freed when it becomes empty.
*/
enum fullness_group {
ZS_ALMOST_FULL,
ZS_ALMOST_EMPTY,
+ ZS_FULL,
+
_ZS_NR_FULLNESS_GROUPS,
ZS_EMPTY,
- ZS_FULL
};
+#define _ZS_NR_AVAILABLE_FULLNESS_GROUPS ZS_FULL
/*
* We assign a page to ZS_ALMOST_EMPTY fullness group when:
@@ -722,12 +725,12 @@ cleanup:
return first_page;
}
-static struct page *find_get_zspage(struct size_class *class)
+static struct page *find_available_zspage(struct size_class *class)
{
int i;
struct page *page;
- for (i = 0; i < _ZS_NR_FULLNESS_GROUPS; i++) {
+ for (i = 0; i < _ZS_NR_AVAILABLE_FULLNESS_GROUPS; i++) {
page = class->fullness_list[i];
if (page)
break;
@@ -1013,7 +1016,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size)
BUG_ON(class_idx != class->index);
spin_lock(&class->lock);
- first_page = find_get_zspage(class);
+ first_page = find_available_zspage(class);
if (!first_page) {
spin_unlock(&class->lock);
--
1.8.3.1