Message-ID: <1452818184-2994-1-git-send-email-junil0814.lee@lge.com>
Date: Fri, 15 Jan 2016 09:36:24 +0900
From: Junil Lee <junil0814.lee@....com>
To: <minchan@...nel.org>, <ngupta@...are.org>
CC: sergey.senozhatsky.work@...il.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Junil Lee <junil0814.lee@....com>
Subject: [PATCH] zsmalloc: fix migrate_zspage-zs_free race condition
migrate_zspage() and zs_free() synchronize via a tag that sets 1 on the
last bit of the obj (HANDLE_PIN_BIT). However, record_obj() in
migrate_zspage() stores the new obj to the handle before unpin_tag() is
called, and the new obj's pin bit is clear. The handle is therefore
effectively unlocked early, and a concurrent zs_free() can take
pin_tag() on it before migration reaches unpin_tag(), which is the
correct unlock path.

To prevent this premature unlock, tag the new obj with the pin bit in
migrate_zspage() before record_obj() publishes it, so the lock is held
until the proper unpin_tag().

The problem is summarized by the call flow below:
CPU0                                       CPU1
migrate_zspage()
  find_alloced_obj()
    trypin_tag()  -- obj |= HANDLE_PIN_BIT
  obj_malloc()    -- new obj is not pinned
  record_obj()    -- unlock, breaks sync   zs_free()
                                             pin_tag()   -- gets the lock
                                             unpin_tag()
Signed-off-by: Junil Lee <junil0814.lee@....com>
---
mm/zsmalloc.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index e7414ce..bb459ef 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1635,6 +1635,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class,
free_obj = obj_malloc(d_page, class, handle);
zs_object_copy(free_obj, used_obj, class);
index++;
+ free_obj |= BIT(HANDLE_PIN_BIT);
record_obj(handle, free_obj);
unpin_tag(handle);
obj_free(pool, class, used_obj);
--
2.6.2