Message-ID: <20160331063416.GA3343@swordfish>
Date:	Thu, 31 Mar 2016 15:34:16 +0900
From:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
To:	Minchan Kim <minchan@...nel.org>
Cc:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: zram: per-cpu compression streams

Hello Minchan,

On (03/31/16 14:53), Minchan Kim wrote:
> Hello Sergey,
>
> > that's a good question. I quickly looked into the fio source code;
> > we need to use the "buffer_pattern=str" option, I think, so the
> > buffers will be filled with the same data.
> > 
> > I don't mind having buffer_compress_percentage as a separate test (set
> > as a local test option), but I think that using a common buffer pattern
> > adds more confidence when we compare test results.
> 
> If we both use the same "buffer_compress_percentage=something", it's
> good to compare. The benefit of buffer_compress_percentage is that we
> can easily change the compression ratio in zram testing and run various
> tests to see how the compression ratio or speed affects the system.

let's start with "common data" (buffer_pattern=str), not a common
compression ratio. buffer_compress_percentage=something is calculated
for which compression algorithm? deflate (zlib)? or is it something else?
we use lzo/lz4, so common data is more predictable.
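
to make it concrete, I mean something along these lines (an untested
sketch; everything except the buffer options is a made-up placeholder):

  [global]
  directory=/mnt/zram   ; placeholder: a fs mounted on the zram device
  rw=write
  bs=4k
  size=500m

  [buffer-pattern-test]
  ; fill every I/O buffer with the same bytes, so lzo/lz4 compress
  ; identical data on both of our setups
  buffer_pattern=0xdeadbeef
  ; the alternative being discussed: let fio target a compression ratio
  ; buffer_compress_percentage=50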

[..]
> > sure.
> 
> I tested with your suggested parameters.
> On my side, the win is better compared to my previous test, but it seems
> your test is too fast. IOW, filesize is small and loops is just 1.
> Please test with filesize=500m loops=10 or 20.

that will require a 5G zram device (10 jobs * 500m each); I don't have
that much RAM on this box, so I'll test later today on another box.

I split the device size between the jobs: if I have 10 jobs, then the
file size of each job is DEVICE_SZ/10, so in total the jobs write/read
DEVICE_SZ bytes. The runs start with 1 job using a single DEVICE_SZ/1
file and go up to 10 jobs using DEVICE_SZ/10 files.
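
in fio-speak, the 10-job step of that series boils down to something
like this (a simplified sketch, not my actual script; the mount point
and block size are placeholders):

  [global]
  directory=/mnt/zram   ; placeholder: fs on top of the zram device
  rw=write
  bs=4k
  loops=10              ; repeat each pass, as you suggested

  ; 10 jobs, each writing DEVICE_SZ/10, so every step of the series
  ; pushes DEVICE_SZ bytes in total (here: 10 x 500m = 5G)
  [split-10]
  numjobs=10
  size=500m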

> It can make your test more stable; the enhancement is 10~20% on my side.
> Let's discuss further once the test results between us are consistent.

	-ss
