Message-ID: <20160330221233.GA6736@bbox>
Date: Thu, 31 Mar 2016 07:12:33 +0900
From: Minchan Kim <minchan@...nel.org>
To: Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: zram: per-cpu compression streams
On Wed, Mar 30, 2016 at 05:34:19PM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
> sorry for the long reply.
>
> On (03/28/16 12:21), Minchan Kim wrote:
> [..]
> > group_reporting
> > buffer_compress_percentage=50
> > filename=/dev/zram0
> > loops=10
>
> I used a slightly different script: no `buffer_compress_percentage' option,
> because it provides "a mix of random data and zeroes"
Normally, zram's compression ratio is 2 or 3, so I used it.
Hmm, isn't that closer to a real-world use case?
If we don't use buffer_compress_percentage, what is the content of the buffer?
>
> buffer_compress_percentage=int
> If this is set, then fio will attempt to provide IO buffer content
> (on WRITEs) that compress to the specified level. Fio does this by
> providing a mix of random data and zeroes
>
> and I also used scramble_buffers=0, but the default scramble_buffers is
> true, so
>
> scramble_buffers=bool
> If refill_buffers is too costly and the target is using data
> deduplication, then setting this option will slightly modify the IO
> buffer contents to defeat normal de-dupe attempts. This is not
> enough to defeat more clever block compression attempts, but it will
> stop naive dedupe of blocks. Default: true.
>
> hm, but I guess that's not enough; fio will probably generate different
> data (well, unless we ask it to zero-fill the buffers) for
> different tests, causing different zram->zsmalloc behaviour. Need
> to check it.
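If the worry is buffer content drifting between runs, maybe pinning fio's
random generators would help; an untested sketch (the seed value is an
arbitrary placeholder):

    # fix the random sequence so repeated runs feed zram identical data
    fio --name=seq-write --filename=/dev/zram0 --rw=write --bs=4k \
        --loops=10 --randseed=1234 --scramble_buffers=0

With a fixed seed, every run should write the same data, so the zsmalloc
behaviour stays comparable between tests.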
>
>
> > Hmm, could you retest to show how big the benefit is?
>
> sure. the results are:
>
> - seq-read
> - rand-read
> - seq-write
> - rand-write
> - mixed-seq (READ + WRITE)
> - mixed-rand (READ + WRITE)
>
> TEST 4 streams 8 streams per-cpu
>
> #jobs1
> READ: 2665.4MB/s 2515.2MB/s 2632.4MB/s
> READ: 2258.2MB/s 2055.2MB/s 2166.2MB/s
> WRITE: 933180KB/s 894260KB/s 898234KB/s
> WRITE: 765576KB/s 728154KB/s 746396KB/s
> READ: 563169KB/s 541004KB/s 551541KB/s
> WRITE: 562660KB/s 540515KB/s 551043KB/s
> READ: 493656KB/s 477990KB/s 488041KB/s
> WRITE: 493210KB/s 477558KB/s 487600KB/s
> #jobs2
> READ: 5116.7MB/s 4607.1MB/s 4401.5MB/s
> READ: 4401.5MB/s 3993.6MB/s 3831.6MB/s
> WRITE: 1539.9MB/s 1425.5MB/s 1600.0MB/s
> WRITE: 1311.1MB/s 1228.7MB/s 1380.6MB/s
> READ: 1001.8MB/s 960799KB/s 989.63MB/s
> WRITE: 998.31MB/s 957540KB/s 986.26MB/s
> READ: 921439KB/s 860387KB/s 899720KB/s
> WRITE: 918314KB/s 857469KB/s 896668KB/s
> #jobs3
> READ: 6670.9MB/s 6469.9MB/s 6548.8MB/s
> READ: 5743.4MB/s 5507.8MB/s 5608.4MB/s
> WRITE: 1923.8MB/s 1885.9MB/s 2191.9MB/s
> WRITE: 1622.4MB/s 1605.4MB/s 1842.2MB/s
> READ: 1277.3MB/s 1295.8MB/s 1395.2MB/s
> WRITE: 1276.9MB/s 1295.4MB/s 1394.7MB/s
> READ: 1152.6MB/s 1137.1MB/s 1216.6MB/s
> WRITE: 1152.2MB/s 1137.6MB/s 1216.2MB/s
> #jobs4
> READ: 8720.4MB/s 7301.7MB/s 7896.2MB/s
> READ: 7510.3MB/s 6690.1MB/s 6456.2MB/s
> WRITE: 2211.6MB/s 1930.8MB/s 2713.9MB/s
> WRITE: 2002.2MB/s 1629.8MB/s 2227.7MB/s
Your case shows a 40% win. It's huge, nice!
I tested with your guidelines (i.e., no buffer_compress_percentage,
scramble_buffers=0) but still see only a 10% enhancement on my machine.
Hmm...
How about testing my fio job file on your machine?
Is it still a 40% win?
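For reference, mine is built around the fragment I posted up-thread,
roughly like this (the per-test sections with rw=, bs=, numjobs= are
omitted here, so this is only a sketch of the shape, not the full file):

    cat > zram-test.fio << 'EOF'
    [global]
    group_reporting
    buffer_compress_percentage=50
    filename=/dev/zram0
    loops=10
    EOF
    fio zram-test.fio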
Also, I want to test again with exactly the same configuration as yours.
Could you tell me your zram environment (i.e., disksize, compression
algorithm) and share your fio job file?
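Just so we mean the same thing by "environment", the knobs I have in mind
are these (the values below are placeholders for illustration, not my
actual settings):

    # set algorithm and streams before disksize; writing disksize
    # initializes the device
    echo lzo > /sys/block/zram0/comp_algorithm
    echo 8 > /sys/block/zram0/max_comp_streams
    echo 4G > /sys/block/zram0/disksize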
Thanks.