Message-ID: <20170628154157.GA528@tigerII.localdomain>
Date: Thu, 29 Jun 2017 00:41:57 +0900
From: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Juneho Choi <juno.choi@....com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
kernel-team <kernel-team@....com>
Subject: Re: [PATCH v1 0/7] writeback incompressible pages to storage
Hello,
On (06/26/17 15:52), Minchan Kim wrote:
[..]
> zRam is useful for memory saving with compressible pages but sometimes
> the workload changes and the system ends up with lots of incompressible
> pages, which is very harmful for zram.
could do. that makes zram quite complicated, to be honest. no offense,
but the whole zram "good compression" margin looks completely arbitrary
to me, and building complex logic on top of an arbitrary threshold is
a bit tricky. but I see what problem you are trying to address.
> This patch adds writeback support to zram so the admin can set up
> a backing block device and zram can save memory by writing out
> incompressible pages (1/4 comp ratio) to that device instead of
> keeping them in memory.
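if I read the cover letter right, the store-path decision boils down to
something like the toy userspace sketch below. everything in it is my own
guess: the exact threshold, the names (GOOD_COMP_LIMIT, toy_compress) and
the structure are illustrative only, not the actual zram or patch code.

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
/* assumed cut-off: a page whose compressed size stays above 3/4 of
 * PAGE_SIZE (the "1/4 comp ratio" from the cover letter, as I read it)
 * counts as incompressible */
#define GOOD_COMP_LIMIT (PAGE_SIZE / 4 * 3)

/* stand-in for the real compressor; only the returned length matters */
static size_t toy_compress(const unsigned char *src, unsigned char *dst)
{
        memcpy(dst, src, PAGE_SIZE);
        return PAGE_SIZE - 16;          /* pretend the data barely shrank */
}

int main(void)
{
        unsigned char page[PAGE_SIZE] = { 0 };
        unsigned char out[PAGE_SIZE];
        size_t clen = toy_compress(page, out);

        if (clen > GOOD_COMP_LIMIT)
                printf("incompressible (%zu bytes): write raw page to backing device\n", clen);
        else
                printf("compressible (%zu bytes): keep it in zsmalloc\n", clen);
        return 0;
}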
hm, alternative idea. just an idea. can we try compressing the page
with another algorithm? example: downcast from lz4 to zlib? we can
set up a fallback "worst case" algorithm, so each entry carries an
additional flag telling whether the src page was compressed with the
fast or the slow algorithm. that sounds easier to me than "create a
new block device and bind it to zram, etc.". but I may be wrong.
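roughly what I have in mind, as a toy userspace sketch. again, all names
(store_page, fast_compress, slow_compress, GOOD_COMP_LIMIT) and numbers are
made up for illustration; lz4/zlib are only stand-ins here, not the real
kernel crypto API.

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define GOOD_COMP_LIMIT (PAGE_SIZE / 4 * 3)

/* per-entry flag recording which compressor produced the data, so a
 * later read knows which decompressor to call */
enum comp_algo { ALGO_FAST, ALGO_SLOW };

/* stand-ins for the fast (lz4-like) and slow (zlib-like) compressors */
static size_t fast_compress(const unsigned char *src, unsigned char *dst)
{
        memcpy(dst, src, PAGE_SIZE);
        return PAGE_SIZE - 8;           /* pretend it barely helped */
}

static size_t slow_compress(const unsigned char *src, unsigned char *dst)
{
        memcpy(dst, src, PAGE_SIZE);
        return PAGE_SIZE / 2;           /* pretend it did much better */
}

static size_t store_page(const unsigned char *page, unsigned char *dst,
                         enum comp_algo *algo)
{
        size_t clen = fast_compress(page, dst);

        *algo = ALGO_FAST;
        if (clen > GOOD_COMP_LIMIT) {
                /* the fast algorithm did poorly: retry with the stronger
                 * one and remember that in the entry flag */
                clen = slow_compress(page, dst);
                *algo = ALGO_SLOW;
        }
        return clen;
}

int main(void)
{
        unsigned char page[PAGE_SIZE] = { 0 }, out[PAGE_SIZE];
        enum comp_algo algo;
        size_t clen = store_page(page, out, &algo);

        printf("stored %zu bytes with the %s algorithm\n",
               clen, algo == ALGO_FAST ? "fast" : "slow");
        return 0;
}

the per-entry flag is the only extra metadata this needs: on read you
simply pick the matching decompressor.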
-ss