Message-ID: <AANLkTimm8o6FnDon=eMTepDaoViU9tjteAYE9kmJhMsx@mail.gmail.com>
Date: Wed, 9 Feb 2011 08:56:07 +0900
From: Minchan Kim <minchan.kim@...il.com>
To: Dan Magenheimer <dan.magenheimer@...cle.com>
Cc: gregkh@...e.de, Chris Mason <chris.mason@...cle.com>,
akpm@...ux-foundation.org, torvalds@...ux-foundation.org,
matthew@....cx, linux-kernel@...r.kernel.org, linux-mm@...ck.org,
ngupta@...are.org, jeremy@...p.org,
Kurt Hackel <kurt.hackel@...cle.com>, npiggin@...nel.dk,
riel@...hat.com, Konrad Wilk <konrad.wilk@...cle.com>,
mel@....ul.ie, kosaki.motohiro@...fujitsu.com,
sfr@...b.auug.org.au, wfg@...l.ustc.edu.cn, tytso@....edu,
viro@...iv.linux.org.uk, hughd@...gle.com, hannes@...xchg.org
Subject: Re: [PATCH V2 2/3] drivers/staging: zcache: host services and PAM services
On Wed, Feb 9, 2011 at 8:27 AM, Dan Magenheimer
<dan.magenheimer@...cle.com> wrote:
> Hi Minchan --
>
>> First of all, thanks for endless effort.
>
> Sometimes it does seem endless ;-)
>
>> I didn't look at code entirely but it seems this series includes
>> frontswap.
>
> The "new zcache" optionally depends on frontswap, but frontswap is
> a separate patchset. If the frontswap patchset is present
> and configured, zcache will use it to dynamically compress swap pages.
> If frontswap is not present or not configured, zcache will only
> use cleancache to dynamically compress clean page cache pages.
> For best results, both frontswap and cleancache should be enabled.
> (and see the link in PATCH V2 0/3 for a monolithic patch against
> 2.6.37 that enabled both).
>
>> Finally frontswap is to replace zram?
>
> Nitin and I have agreed that, for now, both frontswap and zram
> should continue to exist. They have similar functionality but
> different use models. Over time we will see if they can be merged.
>
> Nitin and I agreed offlist that the following summarizes the
> differences between zram and frontswap:
>
> ===========
>
> Zram uses an asynchronous model (i.e. it uses the block I/O subsystem)
> and requires a device to be explicitly created. When used for
> swap, mkswap creates a fixed-size swap device (usually with higher
> priority than any disk-based swap device) and zram is filled
> until it is full, at which point other lower-priority (disk-based)
> swap devices are then used. So zram is well-suited for a fixed-
> size-RAM machine with a known workload where an administrator
> can pre-configure a zram device to improve RAM efficiency during
> peak memory load.
>
> Frontswap uses a synchronous model, circumventing the block I/O
> subsystem. The frontswap "device" is completely dynamic in size,
> i.e. frontswap is queried for every individual page-to-be-swapped
> and, if rejected, the page is swapped to the "real" swap device.
> So frontswap is well-suited for highly dynamic conditions where
> workload is unpredictable and/or RAM size may "vary" due to
> circumstances not entirely within the kernel's control.
>
> ==========
>
> Does that make sense?
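
To make the zram side of the comparison concrete, here is a rough C
sketch (illustrative only; the names and details are not taken from the
actual zram or swap code) of what a swap-out looks like in that model:
the page becomes ordinary block I/O to a pre-created, fixed-size
/dev/zramN, and the block layer completes it asynchronously.

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/pagemap.h>

/*
 * Illustrative sketch only -- not the real zram driver or swap code.
 * In the zram model the kernel does not treat the compressed device
 * specially: a swap-out is ordinary asynchronous block I/O submitted
 * to /dev/zramN, which the administrator created in advance with a
 * fixed size and (usually) a higher swap priority than the disk.
 */
static void sketch_end_swap_write(struct bio *bio, int error)
{
        struct page *page = bio->bi_io_vec[0].bv_page;

        /* Runs later, when the block layer completes the write. */
        end_page_writeback(page);
        bio_put(bio);
}

static void sketch_swap_out_via_block_layer(struct page *page,
                                            struct block_device *swap_bdev,
                                            sector_t sector)
{
        struct bio *bio = bio_alloc(GFP_NOIO, 1);

        bio->bi_bdev = swap_bdev;       /* e.g. the zram device */
        bio->bi_sector = sector;
        bio->bi_end_io = sketch_end_swap_write;
        bio_add_page(bio, page, PAGE_SIZE, 0);

        set_page_writeback(page);
        unlock_page(page);
        submit_bio(WRITE, bio);         /* returns before the I/O completes */
}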
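
And a similarly rough sketch of the frontswap side; frontswap_try_store()
and real_swap_writepage() are invented names standing in for the real
hook and for the normal block-based swap path, just to show the shape of
the control flow described above.

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/writeback.h>

/*
 * Illustrative sketch only -- not the actual frontswap patchset code.
 * The backend (zcache here) is asked synchronously about every single
 * page and may accept or reject it, so its "size" is simply whatever
 * it can hold at that moment and no device needs to be set up first.
 */
static int sketch_swap_writepage(struct page *page,
                                 struct writeback_control *wbc)
{
        if (frontswap_try_store(page) == 0) {
                /*
                 * Accepted: the page now lives compressed in zcache.
                 * The store finished synchronously, so writeback can
                 * be ended right away -- no bio, no block layer.
                 */
                set_page_writeback(page);
                unlock_page(page);
                end_page_writeback(page);
                return 0;
        }

        /*
         * Rejected (e.g. zcache has no room left): fall back to the
         * normal asynchronous path to the real swap device.
         */
        return real_swap_writepage(page, wbc);
}
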
Thanks for the quick reply.
As I read your comment, I can't see what benefit zram has over
frontswap on these points:

1. asynchronous model
2. usability
3. adaptive, dynamic RAM size

Going by your description, zram isn't better than frontswap on points
2 and 3, I think.
On point 1, zram may be better than frontswap, but I doubt how much we
really gain from asynchronous operation in a ramdisk-like model.
If the block layer adds a big overhead in such a model, couldn't we
remove that overhead in general?

The one benefit I can think of is that zram exports a block device
interface, so someone can use it as a general compressed block device.
Is exporting a block device interface reason enough for zram to live on?

Maybe I am missing some of zram's benefits.
At least, I am not yet convinced that zram and frontswap should coexist.
AFAIK, you and Nitin discussed this many times a long time ago, but I
didn't follow it. Sorry if I am missing something.
Thanks.
--
Kind regards,
Minchan Kim