Message-ID: <4E5EB066.6020007@linux.vnet.ibm.com>
Date:	Wed, 31 Aug 2011 17:06:30 -0500
From:	Seth Jennings <sjenning@...ux.vnet.ibm.com>
To:	Dan Magenheimer <dan.magenheimer@...cle.com>
CC:	gregkh@...e.de, devel@...verdev.osuosl.org, ngupta@...are.org,
	cascardo@...oscopio.com, rdunlap@...otime.net,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/3] staging: zcache: xcfmalloc support

On 08/31/2011 02:46 PM, Dan Magenheimer wrote:
>> This patchset introduces a new memory allocator for persistent
>> pages for zcache.  The current allocator is xvmalloc.  xvmalloc
>> has two notable limitations:
>> * High (up to 50%) external fragmentation on allocation sets > PAGE_SIZE/2
>> * No compaction support which reduces page reclaimation
>>
>> xcfmalloc seeks to fix these issues by using scatter-gather model that
>> allows for cross-page allocations and relocatable data blocks.
>>
>> In tests, with pages that only compress to 75% of their original
>> size, xvmalloc had an effective compression (pages used by the
>> compressed memory pool / pages stored) of ~95% (~20% lost to fragmentation). Almost nothing
>> was gained by the compression in this case. xcfmalloc had an effective
>> compression of ~77% (about ~2% lost to fragmentation and metadata overhead).
> 
> Hi Seth --
> 
> Do you have any data comparing xcfmalloc vs xvmalloc for
> compression ratio and/or performance (cycles to compress
> or decompress different pages) on a wide(r) range of data?
> Assuming xcfmalloc isn't "always better", maybe it would
> be best to allow the algorithm to be selectable?  (And
> then we would also need to decide the default.)
> 

I can get you some results comparing the two tomorrow.

You have to make the distinction between the
"compression ratio" and the "effective compression".
The compression ratio is the same, since the compression
algorithm, LZO, wasn't changed.  The effective compression,
the ratio of allocator pool pages used to stored compressed
pages, does differ between the two, especially for
allocation sets > PAGE_SIZE/2.

What the numbers will tell you is that for allocation sets
< PAGE_SIZE/2, xcfmalloc is a little worse (~2% greater
overhead).  But for allocation sets > PAGE_SIZE/2,
xcfmalloc has up to a 50% advantage over xvmalloc.

As far as performance numbers go, all I can say is that
the throughput is the same between the two.  I'm not
sure how to get, for example, a cycles delta
between the two.

It would be difficult to make it selectable because the
function signatures (and some concepts) aren't the same.
You can see the changes that were required in patch
2/3.

> (Hopefully Nitin will have a chance to comment, since he
> has much more expertise in compression than I do.)
> 
> Thanks,
> Dan

Thanks,
Seth

