Message-ID: <4F207AA5.2080309@mikemestnik.net>
Date: Wed, 25 Jan 2012 15:56:53 -0600
From: Mike Mestnik <cheako@...emestnik.net>
To: linux-kernel@...r.kernel.org
Subject: Tiered swap for zram.
zram is great, but it fills up easily, and when it does, new/fresh
pages get pushed out to disk swap. I've written a small bash script
that flushes zram swap to disk every five minutes. However, this is
better done by the VM subsystem, and I suggest that swap mounts have a
tier as well as the current priority.
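For reference, priorities today are assigned per swap area at swapon
time; a typical setup (device names here are just illustrative) might
look like:

```shell
# zram gets the highest priority so it is filled first;
# the disk partition gets a lower priority as the fallback.
swapon -p 10 /dev/zram0
swapon -p 1  /dev/sda2
```

A tier would be orthogonal to this: priority picks where a page goes
first, while the tier would decide when it migrates onward.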
( while sleep 300; do
      for dev in /dev/zram?; do swapoff "$dev" & done
      wait
      for dev in /dev/zram?; do swapon -p 5 "$dev" & done
  done ) & disown
The idea is that after a page has been in swap tier X for, say, five
minutes, it graduates to swap tier Y. This keeps swap tier X free of
long-standing pages that are just taking up valuable real estate.
This script seems to do what I'm asking for, but at a heavy cost.
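The graduation rule itself is simple; a sketch (the tier names and the
five-minute threshold are illustrative, not an existing interface):

```shell
#!/bin/sh
# Hypothetical sketch: decide which swap tier a page belongs in,
# given how long (in seconds) it has been swapped out.
TIER_X_MAX_AGE=300  # pages younger than 5 min stay in tier X (zram)

tier_for_age() {
    # $1: seconds the page has spent in swap
    if [ "$1" -lt "$TIER_X_MAX_AGE" ]; then
        echo X   # fast, compressed zram tier
    else
        echo Y   # slower, disk-backed tier
    fi
}

tier_for_age 60    # -> X
tier_for_age 600   # -> Y
```

Done in the VM subsystem, this check would move only the aged pages,
instead of my script's sledgehammer of draining every zram device.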
My second suggestion may take a while longer to implement. It involves
adding a new bit-field to the record for each page. This bit-field
would indicate the compression level/type of the page's contents; for
example, zram would set the bit corresponding to the compression it's
configured to use.
This would allow zram to refuse requests to swap pages that are already
compressed, and thus allow zram itself to be swapped out. When a swap
mount refuses a page, this would be equivalent to that swap space being
full, and the next swap mount would be used.
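The fall-through could be sketched like this (the mount names and the
refusal test are illustrative, not a real kernel interface):

```shell
#!/bin/sh
# Hypothetical sketch: a swap mount that refuses a page is treated
# exactly like a full one, so the next mount in order is tried.
try_swap_out() {
    compressed=$1   # 1 if the page is already compressed
    for mount in zram0 disk-swap; do
        if [ "$mount" = "zram0" ] && [ "$compressed" -eq 1 ]; then
            continue   # zram refuses already-compressed pages
        fi
        echo "$mount"  # the page lands here
        return 0
    done
    return 1           # every mount refused: genuinely out of swap
}

try_swap_out 0   # -> zram0
try_swap_out 1   # -> disk-swap
```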
Future advancements of this bit-field could describe the contents of
userspace pages; zram and others could mark a page as not very
compressible. As such, it's clear that this bit-field should be cleared,
or marked dirty, if the page is ever written to.
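In flag terms, the invariant might look like this (the flag names and
layout are hypothetical, chosen just to illustrate the idea):

```shell
#!/bin/sh
# Hypothetical per-page bit-field: bit 0 = "already compressed",
# bit 1 = "compresses poorly". A write invalidates both hints.
COMPRESSED=1
INCOMPRESSIBLE=2

flags=$((COMPRESSED | INCOMPRESSIBLE))
echo "$flags"   # -> 3 (both hints set)

flags=0         # page written to: clear the whole bit-field
echo "$flags"   # -> 0
```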
Thank you for a few moments of your time. I hope these suggestions can
be implemented.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/