Message-Id: <1439474123-11279-1-git-send-email-sergey.senozhatsky@gmail.com>
Date: Thu, 13 Aug 2015 22:55:19 +0900
From: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
To: Minchan Kim <minchan@...nel.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org,
Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>
Subject: [RFC][PATCH 0/4] zram: add zlib compression backend support
Hello,

RFC

I'll just post this series as a separate thread, I guess; sorry if it causes
any inconvenience. Joonsoo will resend his patch series, so the discussions
will `relocate' anyway.
This patchset uses a different, let's say traditional, zram/zcomp approach:
it defines a new zlib compression backend the same way lzo and lz4 are defined.

The key difference is that zlib requires a zstream for both compression and
decompression. zram has a stream-less decompression path for lzo and lz4, and
it works perfectly fast. In order to support zlib, the decompression path needs
to *optionally require* a zstream. I want to make the ZCOMP_NEED_READ_ZSTRM
flag (backend requires a zstream for decompression) backend dependent, so we
will still have the fastest lzo/lz4 possible.
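
Very roughly, the idea looks like this (a sketch only -- the exact callback
signatures and names below are my illustration, not necessarily what the
patches end up with): the backend gains a flags() callback, and decompress()
learns to take optional per-stream private data:

#include <linux/types.h>

/* sketch: a backend advertises whether its read path needs a zstream */
#define ZCOMP_NEED_READ_ZSTRM	(1 << 0)

struct zcomp_backend {
	int (*compress)(const unsigned char *src, unsigned char *dst,
			size_t *dst_len, void *private);
	/* decompress now also takes (optional) stream private data */
	int (*decompress)(const unsigned char *src, size_t src_len,
			  unsigned char *dst, void *private);
	int (*flags)(void);
	void *(*create)(void);
	void (*destroy)(void *private);
	const char *name;
};

/* lzo/lz4 keep the fast, stream-less read path */
static int lzo_flags(void)
{
	return 0;
}

/* zlib requires a zstream for decompression as well */
static int zlib_flags(void)
{
	return ZCOMP_NEED_READ_ZSTRM;
}

The read path then grabs an idle stream only when the active backend reports
ZCOMP_NEED_READ_ZSTRM, and stays stream-free otherwise.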
This is one of the reasons I didn't implement it using the crypto API -- the
crypto API requires a tfm for both compression and decompression. This implies
that the read path now either

a) has to share the idle streams list with the write path, so reads and
   writes become slower, or

b) has to define its own idle streams list, which does

   1) limit the number of concurrently executed read operations (to the
      number of streams in the list)

   2) increase the memory usage of the module (each stream occupies pages
      for workspace buffers, etc.)

For the time being, the crypto API does not provide stream-less decompression
functions, to the best of my knowledge.
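
For reference, this is what the two read paths boil down to with the
in-kernel decompressors (a simplified sketch, not zram code): lzo is a single
stateless call, while zlib_inflate() needs a z_stream with a pre-allocated
workspace:

#include <linux/lzo.h>
#include <linux/zlib.h>

/*
 * Stream-less: no per-call state, so reads can run fully in parallel
 * without taking a stream from the idle list.
 */
static int stateless_read(const unsigned char *src, size_t src_len,
			  unsigned char *dst, size_t *dst_len)
{
	return lzo1x_decompress_safe(src, src_len, dst, dst_len);
}

/*
 * Stream-based: zlib needs a z_stream (with a workspace allocated at
 * stream creation time) even for decompression.
 */
static int stream_read(struct z_stream_s *zstrm, const unsigned char *src,
		       size_t src_len, unsigned char *dst, size_t dst_len)
{
	int ret = zlib_inflateReset(zstrm);

	if (ret != Z_OK)
		return ret;

	zstrm->next_in = src;
	zstrm->avail_in = src_len;
	zstrm->next_out = dst;
	zstrm->avail_out = dst_len;

	ret = zlib_inflate(zstrm, Z_SYNC_FLUSH);
	return ret == Z_STREAM_END ? 0 : ret;
}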
I was, frankly, tempted to rewrite zram to use the crypto API several times.
But each time I couldn't find a real reason. Yes, *in theory* it would give
people HUGE flexibility in selecting compression algorithms. But the question
is -- zram has been around for quite some years, so does anybody need this
flexibility? I can easily picture people selecting between
   ratio                   speed                        alg
   OK compression ratio    very fast                    LZO/LZ4

and

   very good comp ratio    eh... but good comp ratio    zlib
But anything in the middle is just anything in the middle, IMHO. I can't
convince myself that people really want an "eh... comp ratio" + "eh... speed"
algorithm, for example.
From https://code.google.com/p/lz4/ it seems that lzo+lz4+zlib is quite a
good package. And zram was obviously missing the `other side' algorithm --
zlib, for cases when IO speed is not SO important.
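
To give an idea of what patch 0003 has to do (a rough, hypothetical sketch
using the kernel zlib API; the names and exact structure here are mine, not
the actual patch), each zlib stream has to carry its own deflate/inflate
workspaces, which is also where the extra per-stream memory goes:

#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/zlib.h>

/* sketch: per-stream private data for a zlib backend */
struct zlib_private {
	struct z_stream_s deflate_strm;
	struct z_stream_s inflate_strm;
};

static void *zlib_create(void)
{
	struct zlib_private *zp = kzalloc(sizeof(*zp), GFP_KERNEL);

	if (!zp)
		return NULL;

	/*
	 * Workspaces are allocated once per stream; the streams are
	 * (re)initialized with zlib_deflateInit2()/zlib_inflateInit2()
	 * before use.
	 */
	zp->deflate_strm.workspace =
		vzalloc(zlib_deflate_workspacesize(MAX_WBITS, MAX_MEM_LEVEL));
	zp->inflate_strm.workspace =
		vzalloc(zlib_inflate_workspacesize());

	if (!zp->deflate_strm.workspace || !zp->inflate_strm.workspace) {
		vfree(zp->deflate_strm.workspace);
		vfree(zp->inflate_strm.workspace);
		kfree(zp);
		return NULL;
	}
	return zp;
}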
I did some zlib backend testing. A copy-paste from patch 0003:

Copy a directory with the linux kernel source to a zram device (du -sh: 2.3G)
and check the memory usage stats.
mm_stat fields:
orig_data_size
compr_data_size
mem_used_total
mem_limit
mem_used_max
zero_pages
num_migrated
zlib
cat /sys/block/zram0/mm_stat
2522685440 1210486447 1230729216 0 1230729216 5461 0
lzo
cat /sys/block/zram0/mm_stat
2525872128 1713351248 1738387456 0 1738387456 4682 0
ZLIB uses 484+MiB less memory in the test.
Sergey Senozhatsky (4):
zram: introduce zcomp_backend flags callback
zram: extend zcomp_backend decompress callback
zram: add zlib backend
zram: enable zlib backend support
drivers/block/zram/Kconfig | 14 ++++-
drivers/block/zram/Makefile | 1 +
drivers/block/zram/zcomp.c | 30 +++++++++-
drivers/block/zram/zcomp.h | 12 +++-
drivers/block/zram/zcomp_lz4.c | 8 ++-
drivers/block/zram/zcomp_lzo.c | 8 ++-
drivers/block/zram/zcomp_zlib.c | 120 ++++++++++++++++++++++++++++++++++++++++
drivers/block/zram/zcomp_zlib.h | 17 ++++++
drivers/block/zram/zram_drv.c | 23 ++++++--
9 files changed, 222 insertions(+), 11 deletions(-)
create mode 100644 drivers/block/zram/zcomp_zlib.c
create mode 100644 drivers/block/zram/zcomp_zlib.h
--
2.5.0