Message-ID: <6e97a82a-c754-493e-bbf5-58f0bb6a18b5@default>
Date: Thu, 3 Jun 2010 08:43:05 -0700 (PDT)
From: Dan Magenheimer <dan.magenheimer@...cle.com>
To: ngupta@...are.org, andreas.dilger@...cle.com
Cc: Minchan Kim <minchan.kim@...il.com>, chris.mason@...cle.com,
viro@...iv.linux.org.uk, akpm@...ux-foundation.org,
adilger@....com, tytso@....edu, mfasheh@...e.com,
joel.becker@...cle.com, matthew@....cx,
linux-btrfs@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
ocfs2-devel@....oracle.com, linux-mm@...ck.org, jeremy@...p.org,
JBeulich@...ell.com, kurt.hackel@...cle.com, npiggin@...e.de,
dave.mccracken@...cle.com, riel@...hat.com, avi@...hat.com,
konrad.wilk@...cle.com
Subject: RE: [PATCH V2 0/7] Cleancache (was Transcendent Memory): overview
> On 06/03/2010 10:23 AM, Andreas Dilger wrote:
> > On 2010-06-02, at 20:46, Nitin Gupta wrote:
>
> > I was thinking it would be quite clever to do compression in, say,
> > 64kB or 128kB chunks in a mapping (to get decent compression) and
> > then write these compressed chunks directly from the page cache
> > to disk in btrfs and/or a revived compressed ext4.
>
> Batching of pages to get good compression ratio seems doable.
Is there evidence that batching a set of random individual 4K
pages will have a significantly better compression ratio than
compressing the pages separately? I certainly understand that
if the pages are from the same file, compression is likely to
be better, but pages evicted from the page cache (which is
the source for all cleancache_puts) are likely to be quite a
bit more random than that, aren't they?
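
As a rough way to put numbers on that question, a user-space sketch along
the following lines (assuming zlib and synthetic filler data, neither of
which appears in this thread) could compare compressing sixteen 4K pages
individually against compressing them as one 64K batch.  It is only an
illustration of the measurement, not kernel code, and real page-cache
contents would have to be substituted for the filler before the output
meant anything:

/*
 * Hypothetical sketch: per-page vs. batched compression of 4K pages.
 * Build with: gcc -O2 batch_vs_page.c -lz
 * The page contents below are synthetic stand-ins, so the printed
 * sizes say nothing about real page-cache data.
 */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

#define PAGE_SIZE 4096
#define NR_PAGES  16		/* 16 * 4K = one 64K batch */

static unsigned long deflate_len(const unsigned char *src, unsigned long len)
{
	uLongf dst_len = compressBound(len);
	unsigned char *dst = malloc(dst_len);

	if (!dst || compress2(dst, &dst_len, src, len,
			      Z_DEFAULT_COMPRESSION) != Z_OK) {
		fprintf(stderr, "compress2 failed\n");
		exit(1);
	}
	free(dst);
	return dst_len;
}

int main(void)
{
	static unsigned char pages[NR_PAGES][PAGE_SIZE];
	unsigned long per_page = 0, batched;
	int i, j;

	/* Synthetic, vaguely text-like filler; replace with real page data. */
	for (i = 0; i < NR_PAGES; i++)
		for (j = 0; j < PAGE_SIZE; j++)
			pages[i][j] = "the quick brown fox "[(i * 7 + j) % 20];

	/* Compress each 4K page separately and total the results. */
	for (i = 0; i < NR_PAGES; i++)
		per_page += deflate_len(pages[i], PAGE_SIZE);

	/* Compress the same sixteen pages as a single contiguous 64K buffer. */
	batched = deflate_len(&pages[0][0], NR_PAGES * PAGE_SIZE);

	printf("per-page total: %lu bytes, 64K batch: %lu bytes\n",
	       per_page, batched);
	return 0;
}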