Date:	Fri, 04 Jun 2010 15:06:49 +0530
From:	Nitin Gupta <ngupta@...are.org>
To:	Dan Magenheimer <dan.magenheimer@...cle.com>
CC:	andreas.dilger@...cle.com, Minchan Kim <minchan.kim@...il.com>,
	chris.mason@...cle.com, viro@...iv.linux.org.uk,
	akpm@...ux-foundation.org, adilger@....com, tytso@....edu,
	mfasheh@...e.com, joel.becker@...cle.com, matthew@....cx,
	linux-btrfs@...r.kernel.org, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
	ocfs2-devel@....oracle.com, linux-mm@...ck.org, jeremy@...p.org,
	JBeulich@...ell.com, kurt.hackel@...cle.com, npiggin@...e.de,
	dave.mccracken@...cle.com, riel@...hat.com, avi@...hat.com,
	konrad.wilk@...cle.com
Subject: Re: [PATCH V2 0/7] Cleancache (was Transcendent Memory): overview

On 06/03/2010 09:13 PM, Dan Magenheimer wrote:
>> On 06/03/2010 10:23 AM, Andreas Dilger wrote:
>>> On 2010-06-02, at 20:46, Nitin Gupta wrote:
>>
>>> I was thinking it would be quite clever to do compression in, say,
>>> 64kB or 128kB chunks in a mapping (to get decent compression) and
>>> then write these compressed chunks directly from the page cache
>>> to disk in btrfs and/or a revived compressed ext4.
>>
>> Batching of pages to get a good compression ratio seems doable.
> 
> Is there evidence that batching a set of random individual 4K
> pages will have a significantly better compression ratio than
> compressing the pages separately?  I certainly understand that
> if the pages are from the same file, compression is likely to
> be better, but pages evicted from the page cache (which is
> the source for all cleancache_puts) are likely to be quite a
> bit more random than that, aren't they?
> 


Batching pages from random files may not be very effective, but it
would be interesting to collect some data on this. Still, per-inode
batching of pages seems doable, and that should help us get around
this problem: pages belonging to the same file are far more likely to
share redundancy, so compressing them together should approach the
same-file case you mention.
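
As a rough way to gather that data from user space (just an
illustrative sketch, not kernel code: the use of zlib, the 32-page /
128K batch size, and the file-driven input are all my assumptions),
one could compare the total deflate output for pages compressed one
at a time against the same pages compressed as a single batch:

/*
 * Hypothetical user-space experiment: compare the compressed size of
 * NPAGES 4K pages deflated one at a time against the same pages
 * deflated as a single batch. Assumes zlib's compress2();
 * build with: cc batch_ratio.c -lz
 */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

#define PAGE_SIZE 4096
#define NPAGES    32	/* a 128K batch, as in Andreas' suggestion */

static size_t deflated_size(const unsigned char *src, size_t len)
{
	uLongf dlen = compressBound(len);
	unsigned char *dst = malloc(dlen);

	if (!dst || compress2(dst, &dlen, src, len,
			      Z_DEFAULT_COMPRESSION) != Z_OK) {
		fprintf(stderr, "compress failed\n");
		exit(1);
	}
	free(dst);
	return dlen;
}

int main(int argc, char **argv)
{
	static unsigned char pages[NPAGES * PAGE_SIZE];
	size_t individual = 0, batched;
	FILE *f;
	int i;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	f = fopen(argv[1], "rb");
	if (!f || fread(pages, PAGE_SIZE, NPAGES, f) != NPAGES) {
		fprintf(stderr, "could not read %d pages\n", NPAGES);
		return 1;
	}
	fclose(f);

	/* Compress each 4K page separately, then the whole batch. */
	for (i = 0; i < NPAGES; i++)
		individual += deflated_size(pages + i * PAGE_SIZE,
					    PAGE_SIZE);
	batched = deflated_size(pages, sizeof(pages));

	printf("per-page total: %zu bytes, batched: %zu bytes\n",
	       individual, batched);
	return 0;
}

Reading the pages from a single file would model per-inode batching;
sampling 4K chunks from many unrelated files would approximate the
random eviction stream you describe.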

Thanks,
Nitin