Message-Id: <1489632398-31501-2-git-send-email-iamjoonsoo.kim@lge.com>
Date:   Thu, 16 Mar 2017 11:46:35 +0900
From:   js1304@...il.com
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Minchan Kim <minchan@...nel.org>,
        Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
        linux-kernel@...r.kernel.org, kernel-team@....com,
        Joonsoo Kim <iamjoonsoo.kim@....com>
Subject: [PATCH 1/4] mm/zsmalloc: always set movable/highmem flag to the zspage

From: Joonsoo Kim <iamjoonsoo.kim@....com>

A zspage is always movable, and its contents are accessed through
zs_map_object(), which returns a directly accessible pointer to the
object. This is independent of the user's allocation flags, so it is
better to always set the movable/highmem flags on the zspage itself.
With that done, we no longer need to clear __GFP_MOVABLE/__GFP_HIGHMEM
in cache_alloc_handle()/cache_alloc_zspage(), since no zs_malloc()
caller specifies __GFP_MOVABLE/__GFP_HIGHMEM any more.
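
For reference, a typical zs_malloc() user only ever reaches the object
through the temporary kernel mapping that zs_map_object() sets up,
never through the backing struct page itself, roughly like this
(illustrative sketch only, not code from this patch; pool/src/len are
placeholder names):

  unsigned long handle;
  void *dst;

  /* No __GFP_HIGHMEM/__GFP_MOVABLE needed from the caller's side. */
  handle = zs_malloc(pool, len, GFP_NOIO);
  if (!handle)
          return -ENOMEM;

  /* The object is reached via the mapping, so the zspage may live in highmem. */
  dst = zs_map_object(pool, handle, ZS_MM_WO);
  memcpy(dst, src, len);
  zs_unmap_object(pool, handle);

So zsmalloc can apply __GFP_MOVABLE | __GFP_HIGHMEM internally without
affecting any caller.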

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@....com>
---
 drivers/block/zram/zram_drv.c |  9 ++-------
 mm/zsmalloc.c                 | 10 ++++------
 2 files changed, 6 insertions(+), 13 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 0194441..f65dcd1 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -684,19 +684,14 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 	 */
 	if (!handle)
 		handle = zs_malloc(meta->mem_pool, clen,
-				__GFP_KSWAPD_RECLAIM |
-				__GFP_NOWARN |
-				__GFP_HIGHMEM |
-				__GFP_MOVABLE);
+				__GFP_KSWAPD_RECLAIM | __GFP_NOWARN);
 	if (!handle) {
 		zcomp_stream_put(zram->comp);
 		zstrm = NULL;
 
 		atomic64_inc(&zram->stats.writestall);
 
-		handle = zs_malloc(meta->mem_pool, clen,
-				GFP_NOIO | __GFP_HIGHMEM |
-				__GFP_MOVABLE);
+		handle = zs_malloc(meta->mem_pool, clen, GFP_NOIO);
 		if (handle)
 			goto compress_again;
 
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b7ee9c3..fada232 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -347,8 +347,7 @@ static void destroy_cache(struct zs_pool *pool)
 
 static unsigned long cache_alloc_handle(struct zs_pool *pool, gfp_t gfp)
 {
-	return (unsigned long)kmem_cache_alloc(pool->handle_cachep,
-			gfp & ~(__GFP_HIGHMEM|__GFP_MOVABLE));
+	return (unsigned long)kmem_cache_alloc(pool->handle_cachep, gfp);
 }
 
 static void cache_free_handle(struct zs_pool *pool, unsigned long handle)
@@ -358,9 +357,8 @@ static void cache_free_handle(struct zs_pool *pool, unsigned long handle)
 
 static struct zspage *cache_alloc_zspage(struct zs_pool *pool, gfp_t flags)
 {
-	return kmem_cache_alloc(pool->zspage_cachep,
-			flags & ~(__GFP_HIGHMEM|__GFP_MOVABLE));
-}
+	return kmem_cache_alloc(pool->zspage_cachep, flags);
+}
 
 static void cache_free_zspage(struct zs_pool *pool, struct zspage *zspage)
 {
@@ -1120,7 +1118,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 	for (i = 0; i < class->pages_per_zspage; i++) {
 		struct page *page;
 
-		page = alloc_page(gfp);
+		page = alloc_page(gfp | __GFP_MOVABLE | __GFP_HIGHMEM);
 		if (!page) {
 			while (--i >= 0) {
 				dec_zone_page_state(pages[i], NR_ZSPAGES);
-- 
1.9.1
