Message-ID: <40880.1541434328@turing-police.cc.vt.edu>
Date: Mon, 05 Nov 2018 11:12:08 -0500
From: valdis.kletnieks@...edu
To: Pintu Agarwal <pintu.ping@...il.com>
Cc: linux-mm@...ck.org, open list <linux-kernel@...r.kernel.org>,
kernelnewbies@...nelnewbies.org
Subject: Re: Creating compressed backing_store as swapfile
On Mon, 05 Nov 2018 20:31:46 +0530, Pintu Agarwal said:
> I wanted to have a swapfile (64MB to 256MB) on my system.
> But I wanted the data to be compressed and stored on the disk in my swapfile.
> [Similar to zram, but compressed data should be moved to disk, instead of RAM].
What platform are you on that's memory-constrained enough to need swap,
and also so short on disk space that compressing the swapfile makes sense?
Understanding the hardware constraints here would help in advising you.
> Note: I wanted to optimize RAM space, so performance is not important
> right now for our requirement.
>
> So, what are the options available, to perform this in 4.x kernel version.
> My Kernel: 4.9.x
Given that this is greenfield development, why are you picking a kernel
that's 2 years out of date? You *do* realize that 4.9.x does *not* contain
all the bugfixes since then, only the relatively small subset that qualifies
for 'stable' (see Documentation/process/stable-kernel-rules.rst for the gory
details).
One possible total hack would be to simply use a file-based swap area,
but put the file on a filesystem that supports transparent inline compression.
Note that this will probably *totally* suck on performance, because there's
no good way to find where 4K block 11,493 starts inside the compressed
file, so the filesystem would have to read/decompress from the beginning
of the compressed region. Also, if you write data to a previously unused
location (or even to a previously used spot where the 4K page now compresses
to a different length), you have a bad time inserting it. (Note that zram
can avoid most of this because it can (a) keep a table of pointers to where
each page starts and (b) it isn't constrained to writing 4K blocks on disk,
so if the current compression takes a 4K page down to 1,283 bytes, it doesn't
have to care *too* much if it stores that someplace that crosses a page
boundary.)
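If you want to try that hack anyway, the mechanics are just an ordinary
file-backed swap setup; only the filesystem underneath differs. A rough
sketch (the /compressed mount point and the 128M size are made-up examples;
note that some filesystems, btrfs in particular, refuse to swap on
compressed or CoW files at all, so this may simply not work there):

```shell
# Assumes /compressed is a filesystem mounted with transparent
# compression enabled (hypothetical path).
dd if=/dev/zero of=/compressed/swapfile bs=1M count=128  # no holes allowed
chmod 600 /compressed/swapfile    # swapfiles must not be world-readable
mkswap /compressed/swapfile       # write the swap signature
swapon /compressed/swapfile       # activate it
```

swapon will refuse the file outright on filesystems whose bmap/iomap
support can't handle it, which is the kernel telling you the same thing
as the paragraph above.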
Another thing you will need to worry about is what happens in low-memory
situations - at the time you *most* need to do a swap operation, you may not
have enough memory left to do the I/O. zram basically makes sure it *has* the
memory needed beforehand, and swapping directly to a pre-allocated disk area
doesn't need much additional memory.