Date:	Fri, 12 Sep 2014 13:05:11 -0400
From:	Dan Streetman <>
To:	Minchan Kim <>
Cc:	Linux-MM <>,
	linux-kernel <>,
	Sergey Senozhatsky <>,
	Nitin Gupta <>,
	Seth Jennings <>,
	Andrew Morton <>,
	Mel Gorman <>
Subject: Re: [PATCH 00/10] implement zsmalloc shrinking

On Fri, Sep 12, 2014 at 1:46 AM, Minchan Kim <> wrote:
> On Thu, Sep 11, 2014 at 04:53:51PM -0400, Dan Streetman wrote:
>> Now that zswap can use zsmalloc as a storage pool via zpool, it will
>> try to shrink its zsmalloc zs_pool once it reaches its max_pool_percent
>> limit.  These patches implement zsmalloc shrinking.  The way the pool is
>> shrunk is by finding a zspage and reclaiming it, by evicting each of its
>> objects that is in use.
>>
>> Without these patches zswap, and any other future user of zpool/zsmalloc
>> that attempts to shrink the zpool/zs_pool, will only get errors and will
>> be unable to shrink its zpool/zs_pool.  With the ability to shrink, zswap
>> can keep the most recent compressed pages in memory.
>>
>> Note that the design of zsmalloc makes it impossible to actually find the
>> LRU zspage, so each class and fullness group is searched in a round-robin
>> manner to find the next zspage to reclaim.  Each fullness group orders its
>> zspages in LRU order, so the oldest zspage is used from each fullness group.
> 1. Please Cc Mel, who was strongly against zswap with zsmalloc.
> 2. I don't think LRU stuff should be in the allocator layer.  Especially, it's
>    really hard to make it work well in the zsmalloc design.

I didn't add any LRU - the existing fullness group LRU ordering is
already there.  And yes, the zsmalloc design prevents any real LRU
ordering, beyond per-fullness-group LRU ordering.
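To illustrate the selection scheme being discussed, here is a rough userspace sketch: each (class, fullness group) pair keeps its zspages in LRU order, and a round-robin cursor walks the groups to pick the next reclaim victim. All names and sizes below are illustrative, not the actual mm/zsmalloc.c code.

```c
#include <stddef.h>

#define NCLASSES 4  /* illustrative; real zsmalloc has many more classes */
#define NGROUPS  3

struct zspage_stub {
	struct zspage_stub *lru_next; /* list head == oldest zspage */
	int id;
};

struct pool_stub {
	struct zspage_stub *groups[NCLASSES][NGROUPS]; /* LRU list heads */
	size_t cursor;                                 /* round-robin index */
};

/* Pick the LRU zspage of the next non-empty group, advancing the cursor. */
static struct zspage_stub *next_reclaim_candidate(struct pool_stub *p)
{
	for (size_t i = 0; i < NCLASSES * NGROUPS; i++) {
		size_t idx = (p->cursor + i) % (NCLASSES * NGROUPS);
		struct zspage_stub *head = p->groups[idx / NGROUPS][idx % NGROUPS];

		if (head) {
			p->cursor = (idx + 1) % (NCLASSES * NGROUPS);
			return head; /* oldest zspage in this group */
		}
	}
	return NULL; /* pool is empty */
}
```

Because the cursor resumes where the last scan left off, no single class or fullness group gets drained preferentially; within each group the list head is the oldest zspage, which is the best LRU approximation the structure allows.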

> 3. If you want to add another writeback, make zswap writeback sane first.
>    The current implementation (zswap store -> zbud reclaim -> zswap writeback,
>    even) is really ugly.

Why, what's wrong with that?  How else can zbud/zsmalloc evict stored objects?
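For reference, the callback shape under discussion: the allocator cannot drop an in-use object on its own, so during shrink it calls back into its user (zswap) to write each object out before freeing it. A minimal sketch, with hypothetical names rather than the exact zs_ops API:

```c
typedef unsigned long obj_handle_t;

struct alloc_ops {
	/*
	 * Called by the allocator for each in-use object in a victim
	 * zspage; the user writes the data back (e.g. to the swap
	 * device) and releases the handle.  Returns 0 on success.
	 */
	int (*evict)(void *pool, obj_handle_t handle);
};

/* Example user callback: count evictions (stands in for zswap writeback). */
static int demo_evictions;
static int demo_evict(void *pool, obj_handle_t handle)
{
	(void)pool;
	(void)handle;
	demo_evictions++;
	return 0;
}

/* Shrink path sketch: walk a victim zspage, evicting every live object. */
static int reclaim_zspage_sketch(void *pool, const struct alloc_ops *ops,
				 const obj_handle_t *handles, int n)
{
	for (int i = 0; i < n; i++) {
		int err = ops->evict(pool, handles[i]);

		if (err)
			return err; /* abort reclaim; the zspage survives */
	}
	return 0;
}
```

The layering question in the thread is exactly where this callback should live: handing the allocator an evict hook keeps reclaim policy in the allocator, while the alternative keeps eviction entirely in zswap.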

> 4. Don't make zsmalloc complicated without any data (benefit, regression).
>    I will never ack if you don't give any numbers and a real use case.

OK, I'll run performance tests then, but let me know if you see any
technical problems with any of the patches before then.


>> ---
>> This patch set applies to linux-next.
>> Dan Streetman (10):
>>   zsmalloc: fix init_zspage free obj linking
>>   zsmalloc: add fullness group list for ZS_FULL zspages
>>   zsmalloc: always update lru ordering of each zspage
>>   zsmalloc: move zspage obj freeing to separate function
>>   zsmalloc: add atomic index to find zspage to reclaim
>>   zsmalloc: add zs_ops to zs_pool
>>   zsmalloc: add obj_handle_is_free()
>>   zsmalloc: add reclaim_zspage()
>>   zsmalloc: add zs_shrink()
>>   zsmalloc: implement zs_zpool_shrink() with zs_shrink()
>>  drivers/block/zram/zram_drv.c |   2 +-
>>  include/linux/zsmalloc.h      |   7 +-
>>  mm/zsmalloc.c                 | 314 +++++++++++++++++++++++++++++++++++++-----
>>  3 files changed, 290 insertions(+), 33 deletions(-)
> --
> Kind regards,
> Minchan Kim