Message-ID: <4C45A9BA.1090903@vflare.org>
Date:	Tue, 20 Jul 2010 19:20:50 +0530
From:	Nitin Gupta <ngupta@...are.org>
To:	Dan Magenheimer <dan.magenheimer@...cle.com>
CC:	Pekka Enberg <penberg@...helsinki.fi>,
	Hugh Dickins <hugh.dickins@...cali.co.uk>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Greg KH <greg@...ah.com>, Rik van Riel <riel@...hat.com>,
	Avi Kivity <avi@...hat.com>,
	Christoph Hellwig <hch@...radead.org>,
	Minchan Kim <minchan.kim@...il.com>,
	Konrad Wilk <konrad.wilk@...cle.com>,
	linux-mm <linux-mm@...ck.org>,
	linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/8] zcache: page cache compression support

On 07/20/2010 01:27 AM, Dan Magenheimer wrote:
>> We only keep pages that compress to PAGE_SIZE/2 or less. Compressed
>> chunks are stored using the xvmalloc memory allocator, which is
>> already used by the zram driver for the same purpose. Zero-filled
>> pages are detected, and no memory is allocated for them.
> 
> I'm curious about this policy choice.  I can see why one
> would want to ensure that the average page is compressed
> to less than PAGE_SIZE/2, and preferably PAGE_SIZE/2
> minus the overhead of the data structures necessary to
> track the page.  And I see that this makes no difference
> when the reclamation algorithm is random (as it is for
> now).  But once there is some better reclamation logic,
> I'd hope that this compression factor restriction would
> be lifted and replaced with something much higher.  IIRC,
> compression is much more expensive than decompression,
> so there's no CPU-overhead argument here either, correct?
>

It's true that we waste CPU cycles on every incompressible page we
encounter, but we still can't keep such pages in RAM: these are pages
the host wanted to reclaim, and there is nothing we can do once
compression has failed. Compressed caching makes sense only when we
keep highly compressible pages in RAM, regardless of the reclaim scheme.
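
To make the policy concrete, here is a rough user-space sketch of the
store-path check described above. This is not the actual zcache code:
zlib stands in for the kernel's LZO, the helper names are made up, and
accepted pages are just reported rather than copied into an xvmalloc
chunk. The order matters: zero-filled pages are detected first (stored
for free as a flag), then everything else is compressed and rejected if
it doesn't fit in PAGE_SIZE/2.

/*
 * Sketch of the zcache store-path policy (hypothetical helpers;
 * zlib in place of the kernel's LZO, PAGE_SIZE assumed 4096).
 */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define PAGE_SIZE 4096UL

/* Zero-filled pages need no allocation at all; only a flag is kept. */
static int page_is_zero_filled(const unsigned char *page)
{
	const unsigned long *p = (const unsigned long *)page;
	size_t i;

	for (i = 0; i < PAGE_SIZE / sizeof(*p); i++)
		if (p[i])
			return 0;
	return 1;
}

/*
 * Returns the compressed length if the page is accepted (fits in
 * PAGE_SIZE/2), 0 for a zero-filled page, or -1 if the page is
 * rejected as (nearly) incompressible.
 */
static long zcache_try_store(const unsigned char *page,
			     unsigned char *out, size_t out_size)
{
	uLongf clen = out_size;

	if (page_is_zero_filled(page))
		return 0;

	if (compress2(out, &clen, page, PAGE_SIZE,
		      Z_BEST_SPEED) != Z_OK || clen > PAGE_SIZE / 2)
		return -1;	/* don't cache; let the page be reclaimed */

	return (long)clen;	/* would be copied into an xvmalloc chunk */
}

int main(void)
{
	static unsigned char page[PAGE_SIZE];
	static unsigned char out[PAGE_SIZE * 2];

	/* A zero page: stored for free. */
	printf("zero page -> %ld\n",
	       zcache_try_store(page, out, sizeof(out)));

	/* A highly compressible page: accepted. */
	memset(page, 'A', PAGE_SIZE);
	printf("repetitive page -> %ld\n",
	       zcache_try_store(page, out, sizeof(out)));
	return 0;
}

Builds with: cc sketch.c -lz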

Keeping (nearly) incompressible pages in RAM probably makes sense
for Xen's case, where the cleancache provider runs *inside* a VM,
sending pages to the host. So, if the VM is limited to, say, 512M
while the host has 64G of RAM, caching guest pages, with or without
compression, will help.

Thanks,
Nitin
--
