Date:	Tue, 15 Sep 2015 13:22:16 +0900
From:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To:	Dan Streetman <ddstreet@...e.org>
Cc:	Vlastimil Babka <vbabka@...e.cz>,
	Vitaly Wool <vitalywool@...il.com>,
	Minchan Kim <minchan@...nel.org>,
	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Linux-MM <linux-mm@...ck.org>
Subject: Re: [PATCH 0/3] allow zram to use zbud as underlying allocator

On (09/15/15 00:08), Dan Streetman wrote:
[..]
> 
> it doesn't.  but it has a complex (compared to zbud) way of storing
> pages - many different classes, each of which is made up of zspages,
> which contain multiple actual pages to store some number of
> specifically sized objects.  So it can get fragmented, with lots of
> zspages with empty spaces for objects.  That's what the recently added
> zsmalloc compaction addresses, by scanning all the zspages in all the
> classes and compacting zspages within each class.
> 

correct. a bit of internals: we don't scan all the zspages every
time. each class keeps stats for allocated objects, used objects,
etc., so we 'compact' only the classes that can actually be
compacted:

 static unsigned long zs_can_compact(struct size_class *class)
 {
         unsigned long obj_wasted;
 
         /* objects that are allocated in the class but not in use */
         obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) -
                 zs_stat_get(class, OBJ_USED);
 
         /* how many whole zspages that waste amounts to */
         obj_wasted /= get_maxobj_per_zspage(class->size,
                         class->pages_per_zspage);
 
         /* and how many physical pages compaction could get back */
         return obj_wasted * class->pages_per_zspage;
 }

if we can free any zspages (i.e. get back at least one physical
page), then we attempt to do so.
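
the caller side looks roughly like this (an abbreviated sketch of
zs_compact(), not verbatim mainline code; __zs_compact() itself bails
out as soon as zs_can_compact() reports nothing to gain):

 unsigned long zs_compact(struct zs_pool *pool)
 {
         int i;
         struct size_class *class;

         /* walk the size classes from the largest one down */
         for (i = zs_size_classes - 1; i >= 0; i--) {
                 class = pool->size_class[i];
                 if (!class)
                         continue;
                 /* skip classes merged into a lower index */
                 if (class->index != i)
                         continue;
                 __zs_compact(pool, class);
         }

         return pool->stats.pages_compacted;
 }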

is compaction the root cause of the symptoms Vitaly observes?


Vitaly, if you disable compaction:

---

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 14fc466..d9b5427 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1944,8 +1944,9 @@ struct zs_pool *zs_create_pool(char *name, gfp_t flags)
         * Not critical, we still can use the pool
         * and user can trigger compaction manually.
         */
-       if (zs_register_shrinker(pool) == 0)
+/*     if (zs_register_shrinker(pool) == 0)
                pool->shrinker_enabled = true;
+*/
        return pool;
 
 err:


---

does the 'problem' go away?
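
(fwiw, even with the shrinker commented out, compaction is not gone
entirely: assuming a recent enough kernel and a zram0 device, it can
still be triggered manually via 'echo 1 > /sys/block/zram0/compact',
which should help tell shrinker-driven compaction apart from
compaction as such.)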


> but I haven't followed most of the recent zsmalloc updates too
> closely, so I may be totally wrong :-)
> 
> zbud is much simpler; since it just uses buddied pairs, it simply
> keeps a list of zbud pages with only 1 compressed page stored in them.
> There is still the possibility of fragmentation, but since it's
> simple, it's much smaller.  And there is no compaction implemented in
> it, currently.  The downside, as we all know, is worse efficiency in
> storing compressed pages - it can't do better than 2:1.
> 
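
to put a number on that 2:1 limit: a zbud page holds at most two
compressed objects, so even if both objects compress down to, say, a
quarter of PAGE_SIZE, the physical page still carries only those two,
i.e. at best two pages' worth of data per page.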

	-ss
