Date:	Mon, 28 Mar 2016 12:21:35 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>
CC:	Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: zram: per-cpu compression streams

Hi Sergey,

On Fri, Mar 25, 2016 at 10:47:06AM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
> 
> On (03/25/16 08:41), Minchan Kim wrote:
> [..]
> > >  Test #10 iozone -t 10 -R -r 80K -s 0M -I +Z
> > >    Initial write        3213973.56      2731512.62      4416466.25*
> > >          Rewrite        3066956.44*     2693819.50       332671.94
> > >             Read        7769523.25*     2681473.75       462840.44
> > >          Re-read        5244861.75      5473037.00*      382183.03
> > >     Reverse Read        7479397.25*     4869597.75       374714.06
> > >      Stride read        5403282.50*     5385083.75       382473.44
> > >      Random read        5131997.25      5176799.75*      380593.56
> > >   Mixed workload        3998043.25      4219049.00*     1645850.45
> > >     Random write        3452832.88      3290861.69      3588531.75*
> > >           Pwrite        3757435.81      2711756.47      4561807.88*
> > >            Pread        2743595.25*     2635835.00       412947.98
> > >           Fwrite       16076549.00     16741977.25*    14797209.38
> > >            Fread       23581812.62*    21664184.25      5064296.97
> > >  =          real         0m44.490s       0m44.444s       0m44.609s
> > >  =          user          0m0.054s        0m0.049s        0m0.055s
> > >  =           sys          0m0.037s        0m0.046s        0m0.148s
> > >  
> > >  
> > >  so when the number of active tasks becomes larger than the number
> > >  of online CPUs, iozone reports data that is a bit hard to understand.
> > >  I can assume that since we now keep preemption disabled longer in
> > >  the write path, a concurrent operation (READ or WRITE) cannot preempt
> > >  current anymore... slightly suspicious.
> > >  
> > >  the other hard-to-understand thing is why the READ-only tests have
> > >  such huge jitter. READ-only tests don't depend on streams, they
> > >  don't even use them; we supply compressed data directly to the
> > >  decompression API.
> > >  
> > >  maybe it's better to retire iozone and never use it again.
> > >  
> > >  
> > >  "118 insertions(+), 238 deletions(-)" the patches remove a big
> > >  pile of code.
> > 
> > First of all, thank you very much!
> 
> thanks!
> 
> > At a glance, the write workload is a huge win, but it's worth
> > investigating how such fluctuation/regression happens on the
> > read-related tests (read and mixed workload).
> 
> yes, I was going to investigate in more detail but got interrupted;
> I'll get back to it today/tomorrow.
> 
> > Could you send your patchset? I will test it.
> 
> oh, sorry, sure! attached (it's not a real patch submission yet,
> but they look more or less ready, I guess).
> 
> patches are against next-20160324.

Thanks, I tested your patches with fio.
My laptop has 8G of RAM and 4 CPUs.
The job file is below.

= 
[global]
bs=4k
ioengine=sync
direct=1
size=100m
numjobs=${NUMJOBS}
group_reporting
buffer_compress_percentage=50
filename=/dev/zram0
loops=10

[seq-read]
rw=read
stonewall

[rand-read]
rw=randread
stonewall

[seq-write]
rw=write
stonewall

[rand-write]
rw=randwrite
stonewall

[mixed-seq]
rw=rw
stonewall

[mixed-rand]
rw=randrw
stonewall
=
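
For anyone reproducing this, a rough sketch of the setup and invocation
is below; the device size, compression algorithm and job file name are
assumptions, not necessarily what was used for the numbers above, and
max_comp_streams only exists for the old multi-stream code:

=
# re-initialize the device; stream count and algorithm must be set
# before disksize
echo 1 > /sys/block/zram0/reset
echo 8 > /sys/block/zram0/max_comp_streams    # NR_STREAM, old code only
echo lzo > /sys/block/zram0/comp_algorithm
echo 1G > /sys/block/zram0/disksize

# ${NUMJOBS} in the job file is taken from the environment
NUMJOBS=8 fio zram.fio
=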

= old (i.e., spinlock) version =

1) NR_PROCESS:8 NR_STREAM: 1

seq-read: (groupid=0, jobs=8): err= 0: pid=23148: Mon Mar 28 12:07:15 2016
  read : io=8000.0MB, bw=5925.1MB/s, iops=1517.4K, runt=  1350msec
rand-read: (groupid=1, jobs=8): err= 0: pid=23156: Mon Mar 28 12:07:15 2016
  read : io=8000.0MB, bw=4889.1MB/s, iops=1251.9K, runt=  1636msec
seq-write: (groupid=2, jobs=8): err= 0: pid=23164: Mon Mar 28 12:07:15 2016
  write: io=8000.0MB, bw=914898KB/s, iops=228724, runt=  8954msec
rand-write: (groupid=3, jobs=8): err= 0: pid=23172: Mon Mar 28 12:07:15 2016
  write: io=8000.0MB, bw=913368KB/s, iops=228342, runt=  8969msec
mixed-seq: (groupid=4, jobs=8): err= 0: pid=23180: Mon Mar 28 12:07:15 2016
  read : io=4003.1MB, bw=881152KB/s, iops=220287, runt=  4653msec
mixed-rand: (groupid=5, jobs=8): err= 0: pid=23189: Mon Mar 28 12:07:15 2016
  read : io=4003.5MB, bw=837491KB/s, iops=209372, runt=  4895msec


2) NR_PROCESS:8 NR_STREAM: 8

seq-read: (groupid=0, jobs=8): err= 0: pid=23248: Mon Mar 28 12:07:57 2016
  read : io=8000.0MB, bw=5847.1MB/s, iops=1497.8K, runt=  1368msec
rand-read: (groupid=1, jobs=8): err= 0: pid=23256: Mon Mar 28 12:07:57 2016
  read : io=8000.0MB, bw=4778.1MB/s, iops=1223.5K, runt=  1674msec
seq-write: (groupid=2, jobs=8): err= 0: pid=23264: Mon Mar 28 12:07:57 2016
  write: io=8000.0MB, bw=1644.7MB/s, iops=420879, runt=  4866msec
rand-write: (groupid=3, jobs=8): err= 0: pid=23272: Mon Mar 28 12:07:57 2016
  write: io=8000.0MB, bw=1507.5MB/s, iops=385905, runt=  5307msec
mixed-seq: (groupid=4, jobs=8): err= 0: pid=23280: Mon Mar 28 12:07:57 2016
  read : io=4003.1MB, bw=1225.1MB/s, iops=313839, runt=  3266msec
mixed-rand: (groupid=5, jobs=8): err= 0: pid=23288: Mon Mar 28 12:07:57 2016
  read : io=4003.5MB, bw=1098.4MB/s, iops=281097, runt=  3646msec


3) NR_PROCESS:8 NR_STREAM: 16

seq-read: (groupid=0, jobs=8): err= 0: pid=23350: Mon Mar 28 12:08:38 2016
  read : io=8000.0MB, bw=5843.7MB/s, iops=1495.1K, runt=  1369msec
rand-read: (groupid=1, jobs=8): err= 0: pid=23358: Mon Mar 28 12:08:38 2016
  read : io=8000.0MB, bw=4810.6MB/s, iops=1231.6K, runt=  1663msec
seq-write: (groupid=2, jobs=8): err= 0: pid=23366: Mon Mar 28 12:08:38 2016
  write: io=8000.0MB, bw=1655.7MB/s, iops=423841, runt=  4832msec
rand-write: (groupid=3, jobs=8): err= 0: pid=23374: Mon Mar 28 12:08:38 2016
  write: io=8000.0MB, bw=1501.6MB/s, iops=384384, runt=  5328msec
mixed-seq: (groupid=4, jobs=8): err= 0: pid=23382: Mon Mar 28 12:08:38 2016
  read : io=4003.1MB, bw=1221.9MB/s, iops=312786, runt=  3277msec
mixed-rand: (groupid=5, jobs=8): err= 0: pid=23390: Mon Mar 28 12:08:38 2016
  read : io=4003.5MB, bw=1104.1MB/s, iops=282647, runt=  3626msec

= percpu =

1) NR_PROCESS:8

seq-read: (groupid=0, jobs=8): err= 0: pid=22804: Mon Mar 28 11:58:22 2016
  read : io=8000.0MB, bw=5610.1MB/s, iops=1436.2K, runt=  1426msec
rand-read: (groupid=1, jobs=8): err= 0: pid=22812: Mon Mar 28 11:58:22 2016
  read : io=8000.0MB, bw=4881.3MB/s, iops=1249.6K, runt=  1639msec
seq-write: (groupid=2, jobs=8): err= 0: pid=22820: Mon Mar 28 11:58:22 2016
  write: io=8000.0MB, bw=1814.6MB/s, iops=464399, runt=  4410msec
rand-write: (groupid=3, jobs=8): err= 0: pid=22829: Mon Mar 28 11:58:22 2016
  write: io=8000.0MB, bw=1647.9MB/s, iops=421833, runt=  4855msec
mixed-seq: (groupid=4, jobs=8): err= 0: pid=22837: Mon Mar 28 11:58:22 2016
  read : io=4003.1MB, bw=1275.2MB/s, iops=326433, runt=  3140msec
mixed-rand: (groupid=5, jobs=8): err= 0: pid=22846: Mon Mar 28 11:58:22 2016
  read : io=4003.5MB, bw=1119.3MB/s, iops=286519, runt=  3577msec

In my test, read is stable. It seems iozone or the filesystem made noise
in your test.
The benefit from per-cpu streams on the write side is about 10%, which is
not huge compared to your previous post.
Hmm, could you retest to show how big the benefit is?
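
If it helps, one way to average a few runs per configuration could be
something like this (run count, log names and grep pattern are just
illustrative):

=
# repeat the whole job file a few times and pull out the write bandwidth
for i in 1 2 3; do
        NUMJOBS=8 fio --output=zram-run-$i.log zram.fio
        grep "write: io=" zram-run-$i.log
done
=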
