Message-ID: <4C074ACE.9020704@vflare.org>
Date: Thu, 03 Jun 2010 11:55:18 +0530
From: Nitin Gupta <ngupta@...are.org>
To: Andreas Dilger <andreas.dilger@...cle.com>
CC: Dan Magenheimer <dan.magenheimer@...cle.com>,
Minchan Kim <minchan.kim@...il.com>, chris.mason@...cle.com,
viro@...iv.linux.org.uk, akpm@...ux-foundation.org,
adilger@....com, tytso@....edu, mfasheh@...e.com,
joel.becker@...cle.com, matthew@....cx,
linux-btrfs@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
ocfs2-devel@....oracle.com, linux-mm@...ck.org, jeremy@...p.org,
JBeulich@...ell.com, kurt.hackel@...cle.com, npiggin@...e.de,
dave.mccracken@...cle.com, riel@...hat.com, avi@...hat.com,
konrad.wilk@...cle.com
Subject: Re: [PATCH V2 0/7] Cleancache (was Transcendent Memory): overview
On 06/03/2010 10:23 AM, Andreas Dilger wrote:
> On 2010-06-02, at 20:46, Nitin Gupta wrote:
>> On 06/03/2010 04:32 AM, Dan Magenheimer wrote:
>>>> From: Minchan Kim [mailto:minchan.kim@...il.com]
>>>
>>>>> I am also eagerly awaiting Nitin Gupta's cleancache backend
>>>>> and implementation to do in-kernel page cache compression.
>>>>
>>>> Did Nitin say he would make a cleancache backend for
>>>> page cache compression?
>>>>
>>>> It would be a good feature.
>>>> I have an interest, too. :)
>>>
>>> That was Nitin's plan for his GSoC project when we last discussed
>>> this. Nitin is on the cc list and can comment if this has
>>> changed.
>>
>> Yes, I have just started work on an in-kernel page cache compression
>> backend for cleancache :)
>
> Is there a design doc for this implementation?
It's all on physical paper :)
Anyway, the design is quite simple: the backend only has to act on the
cleancache callbacks.
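To make that concrete, here is a minimal sketch of what the backend
skeleton could look like. The ops table and callback signatures here
follow the cleancache patches as I understand them and may well change;
the zcache_* names are placeholders I'm using purely for illustration:

/* Sketch of a compression backend driven by cleancache callbacks.
 * The ops table mirrors the posted cleancache patches (subject to
 * change); the zcache_* bodies are placeholders, not real code.
 */
#include <linux/cleancache.h>

static void zcache_put_page(int pool_id, ino_t inode, pgoff_t index,
                            struct page *page)
{
        /* compress the page and store it, keyed by (pool, inode, index) */
}

static int zcache_get_page(int pool_id, ino_t inode, pgoff_t index,
                           struct page *page)
{
        /* on hit: decompress into @page and return 0; else report miss */
        return -1;
}

static void zcache_flush_page(int pool_id, ino_t inode, pgoff_t index)
{
        /* drop any stored copy so a later get cannot return stale data */
}

static struct cleancache_ops zcache_ops = {
        .get_page       = zcache_get_page,
        .put_page       = zcache_put_page,
        .flush_page     = zcache_flush_page,
        /* .init_fs, .init_shared_fs, .flush_inode, .flush_fs omitted */
};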
> I was thinking it would be quite clever to do compression in, say, 64kB or 128kB chunks in a mapping (to get decent compression) and then write these compressed chunks directly from the page cache to disk in btrfs and/or a revived compressed ext4.
>
Batching of pages to get a good compression ratio seems doable.
However, writing this compressed data (with or without batching) to disk
seems quite difficult: pages handed to cleancache are no longer part of
the page cache, and the disk might also contain an uncompressed version
of the same data, so the two copies would have to be kept coherent.
There is also the problem of an efficient on-disk structure for storing
variable-sized compressed chunks. I'm not sure how we can deal with all
these issues.
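That said, just to illustrate the batching part you mentioned, the
compression step itself could look something like the sketch below.
Only lzo1x_1_compress() is the real in-kernel LZO interface from
lib/lzo; the batch size and helper are made up for illustration:

#include <linux/errno.h>
#include <linux/lzo.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

#define BATCH_PAGES     16                      /* 64KB with 4KB pages */
#define BATCH_SIZE      (BATCH_PAGES * PAGE_SIZE)

/* Compress one batch already copied into a contiguous buffer @src.
 * @dst must be sized for the LZO worst case (roughly
 * BATCH_SIZE + BATCH_SIZE / 16 + 64 + 3 bytes for incompressible data).
 */
static int compress_batch(const void *src, void *dst, size_t *dst_len)
{
        void *wrkmem = vmalloc(LZO1X_1_MEM_COMPRESS);
        int ret;

        if (!wrkmem)
                return -ENOMEM;
        ret = lzo1x_1_compress(src, BATCH_SIZE, dst, dst_len, wrkmem);
        vfree(wrkmem);
        return ret == LZO_E_OK ? 0 : -EIO;
}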
Thanks,
Nitin