Message-ID: <20170629092930.GC22335@bbox>
Date: Thu, 29 Jun 2017 18:29:30 +0900
From: Minchan Kim <minchan@...nel.org>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Juneho Choi <juno.choi@....com>,
kernel-team <kernel-team@....com>
Subject: Re: [PATCH v1 0/7] writeback incompressible pages to storage
On Thu, Jun 29, 2017 at 06:17:13PM +0900, Sergey Senozhatsky wrote:
> Hello,
>
> On (06/29/17 17:47), Minchan Kim wrote:
> [..]
> > > > This patch supports a writeback feature for zram so an admin can set
> > > > up a block device and, with it, zram can save memory by writing
> > > > incompressible pages (1/4 comp ratio) out to that device once it
> > > > detects them, instead of keeping those pages in memory.
> > >
> > > hm, alternative idea. just an idea. can we try compressing the page
> > > with another algorithm? example: downcast from lz4 to zlib? we can
> > > set up a fallback "worst case" algorithm, so each entry can contain
> > > an additional flag that would tell if the src page was compressed with
> > > the fast or the slow algorithm. that sounds to me easier than "create a
> > > new block device and bond it to zram, etc". but I may be wrong.
> >
> > We tried it, although it was a static setup, not the dynamic adaptation you suggested.
>
> could you please explain more? I'm not sure I understand what
> the configuration was (what is static adaptation?).
echo deflate > /sys/block/zramX/comp_algorithm
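
(For context, a minimal sketch of that kind of static setup, assuming the
standard zram sysfs attributes described in Documentation/blockdev/zram.txt;
the device name and sizes are only examples:

  modprobe zram num_devices=1
  # the algorithm is chosen once, before disksize is set, and applies to
  # every page on the device; that is what makes the setup static
  echo deflate > /sys/block/zram0/comp_algorithm
  echo 1G > /sys/block/zram0/disksize
  mkswap /dev/zram0
  swapon /dev/zram0

Configured this way there is no per-page fallback: every page goes through
the one selected algorithm.)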
>
> > However, the problem was media-stream data, so zlib and lzma added
> > just pointless overhead.
>
> would that overhead be bigger than a full-blown I/O request to
> another block device (potentially slow, or under load, etc. etc.)?
The problem is not the overhead but the memory saving.
Even when we used stronger compression algorithms like zlib and lzma,
the comp ratio was no different from lzo and lz4, so they just
added pointless overhead without any saving.
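
(For reference, the kind of ratio comparison above can be checked with the
per-device stats; a rough sketch assuming the mm_stat layout documented in
Documentation/blockdev/zram.txt, where column 1 is orig_data_size and
column 2 is compr_data_size, both in bytes:

  awk '{ printf "comp ratio: %.2f\n", $1 / $2 }' /sys/block/zram0/mm_stat

Running that after exercising the device with each candidate algorithm is
one way to see whether the ratio actually improves over lzo/lz4.)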